
A Collection of Interesting Networks and Technologies Aiming at the Re-decentralization of the Internet

The Internet we have today is broken. We do not control our data, nor does the network have a native value settlement layer. Thirty years after the mass adoption of the Internet, our data architectures are still based on the concept of the autonomous computer: data is stored and managed centrally on a server and is sent or retrieved by a client. Every time we interact over the Internet, copies of our data are sent to a service provider's server, and every time that happens we lose control over that data. Although we live in a connected world, with ever more devices online (our watches, cars, televisions, and refrigerators), our data is still stored centrally: on our computers and other devices, on USB drives, and even in the cloud. This raises questions of trust. Can I trust the people and institutions that store and manage my data to protect it against any form of corruption, internal or external, deliberate or accidental?

Every time we interact over the Internet, copies of our data are made and sent to the other computer, and when this happens we lose control over that data at the other end of the connection, behind the walled garden of a server. This is not only a problem for the privacy of our personal data; it also leads to many inefficiencies in back-end operations along the supply chain of goods and services. Today's Internet, with its client-server data infrastructure and centralized data management, has many single points of failure, as the recurring data breaches at online service providers show. It also produces high document-management costs and a lack of transparency throughout the supply chain of goods and services.

These questions have historical roots. First came the computer; then the Internet was invented, connecting these autonomous computers to each other with a data transmission protocol. In the early days of personal computing, we would save data to a floppy disk, eject it, walk it over to the person who needed the file, and copy it onto their computer so they could use it. If that person was in another country, the diskette had to be mailed to them. The Internet, and later the WWW, put an end to this by providing a data transmission protocol, TCP/IP, that sped up data transfer and greatly reduced the transaction costs of exchanging information. A decade later, the Internet became more mature and programmable, and we witnessed the emergence of the so-called Web2, which brought us social media and e-commerce platforms. Web2 revolutionized social interactions, bringing producers and consumers of information, goods, and services together and letting us enjoy P2P interactions on a global scale, but always with an intermediary: a platform that acts as a trusted middleman between two people who do not know or trust each other. While these platforms have done a fantastic job of creating a P2P economy, with a sophisticated content discovery and value settlement layer, they also dictate all the rules of transactions and control all of their users' data.

The Internet we use today is primarily built around the idea of the autonomous computer. Data is stored and managed centrally on the servers of trusted institutions. The data on these servers is protected by firewalls, and system administrators are needed to manage those servers and their firewalls. Trying to manipulate the data on a server is like breaking into a house whose security depends on a fence and an alarm system.
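The dynamic described above can be made concrete with a small sketch. The `Server` class and the Alice example below are purely illustrative (not from any real system): the server keeps the canonical record behind its walls, and every client interaction yields only a copy, so deleting or editing your local copy changes nothing on the server's side.

```python
import copy

class Server:
    """A 'trusted' institution that stores and manages data centrally."""

    def __init__(self):
        self._store = {}  # the canonical data lives behind the server's walls

    def upload(self, key, record):
        # The server keeps its own copy; the user no longer controls it.
        self._store[key] = copy.deepcopy(record)

    def fetch(self, key):
        # Clients only ever receive copies, never the canonical record.
        return copy.deepcopy(self._store[key])

    def has(self, key):
        return key in self._store


server = Server()
profile = {"name": "Alice", "email": "alice@example.org"}
server.upload("alice", profile)

# Alice deletes her local copy, but the server still holds the data:
del profile
print(server.has("alice"))             # -> True

# Mutating a fetched copy changes nothing on the server's side:
fetched = server.fetch("alice")
fetched["email"] = "new@example.org"
print(server.fetch("alice")["email"])  # -> alice@example.org
```

This is exactly the house-with-a-fence model: the only thing standing between the canonical record and an attacker (or a negligent administrator) is the server's own perimeter, not anything the data's owner controls.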
