The Web 3.0 concept was first described by Jason Calacanis as a new paradigm of website creation, content development, and search engine optimization. But what is wrong with Web 2.0? Why do we need version 3.0? Just a few years ago there was big “hype” about 2.0, so what has gone wrong?
The main problems of today’s Web are:
- Congestion of network resources by repeatedly duplicated content, with no reliable mechanism for finding the original source.
- Dispersal and disconnection of content, which makes topic-level analysis impossible.
- Presentation that varies from publisher to publisher.
- Weak coupling between search results and users’ core interests.
- Low availability and weak classification of archived content (for instance, in social networks).
Several long-standing issues underlie these problems. One is that the main players are not the owners of the information. Unlike the real world, space in the virtual world is unlimited, which is why the number of places offering information has outgrown the number of units of unique content. Web 2.0 partially corrected the situation: each user received a personal space (for example, a social network account) and the freedom to configure it. But the problem of content uniqueness was only exacerbated by “copy-paste” culture, which drove duplication of information even higher.
On the surface, Web 3.0 should resolve these issues by proposing a transition from site-centered web content to a semantic-centered network: from web pages with arbitrarily configured content to a network of unique objects combined with a finite number of clustered, interconnected documents. On the technical side, this means online services that provide a full range of tools for creating, editing, searching and displaying any type of content while simultaneously classifying and categorizing user activity.
Accordingly, Web 3.0’s main responsibilities include:
- Tracking and indexing content with smart indexing systems that take into account not only the uniqueness of content but also its publication time and original source.
- Tracking not only content but also the history of resource behavior (i.e. the count and quality of unique materials, the timeline of their production, and the extent of material duplication).
- Providing long-term, reliable storage with access for the actions noted above.
Google took the first steps toward addressing these issues in its search algorithm. There was even a service that tracked your content and notified the owner about duplicates, but it was paid and never gained wide recognition. Google’s competitors, such as Bing, also factor this type of check, with different weights, into search result ranking. Still, search systems preferred to concentrate on tracking user behavior and preferences to show more relevant results, instead of focusing on content and source history. As a result, duplicated pages stuffed with ads could still rank in search engines. Another step was tracking user clicks and including click counts in ranking algorithms, but website owners simply began to buy user clicks.
There have been many other attempts by indexing and search systems, but the main takeaway is the need for a comprehensive, non-fragmentary system for tracking content and resource history, exposed through an open API. Such a system should be reliable and guarantee the integrity and availability of stored records, and blockchain could serve these goals.
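The integrity property asked of blockchain storage can be illustrated with a minimal sketch of a hash-chained, append-only record log. This is not a full consensus system, and the `RecordChain` name is our own; it only shows the core idea that each entry commits to the previous one, so tampering with any stored record breaks the chain.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first entry's predecessor

class RecordChain:
    """Append-only log where each entry's hash covers the previous entry."""

    def __init__(self) -> None:
        self.entries: list[dict] = []

    def append(self, record: dict) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else GENESIS
        payload = json.dumps(record, sort_keys=True)
        entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        entry = {"prev": prev_hash, "record": record, "hash": entry_hash}
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute every hash; any edited record or broken link fails."""
        prev = GENESIS
        for e in self.entries:
            payload = json.dumps(e["record"], sort_keys=True)
            if e["prev"] != prev:
                return False
            if hashlib.sha256((prev + payload).encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

In a real deployment the chain would be replicated across nodes, which is what provides the availability half of the requirement.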
The first solution could be, let’s say, a “watch-tower” embedded into or connected to search indexes that tracks content and its characteristics. Information and metadata for each portion of content (i.e. each document) should be extracted, cleaned and normalized before storing. In addition, technical data should be tracked, including capture time, the number of changes compared to previous versions, metrics of the document structure, and so on. Many fields could be used for ranking documents in search results, and each search engine will weigh them in its own way; we think future Web 3.0 specifications should declare which fields are mandatory. And if search systems prefer to establish their own “watch-towers” with dedicated, unique tracking functions, developers should work on core blockchain-based protocols for sharing information between such “towers.”
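The ingestion step described above can be sketched as follows. This is an illustrative assumption, not a specification: the field names (`content_hash`, `revision_count`, etc.) are ours, standing in for the mandatory fields a future Web 3.0 specification would define.

```python
import hashlib
import re
from datetime import datetime, timezone

def normalize(text: str) -> str:
    """Strip markup and collapse whitespace so duplicates hash identically."""
    no_tags = re.sub(r"<[^>]+>", " ", text)
    return re.sub(r"\s+", " ", no_tags).strip().lower()

def track_document(raw: str, previous_versions: int = 0) -> dict:
    """Extract, clean and normalize a document, then record technical metadata."""
    body = normalize(raw)
    return {
        "content_hash": hashlib.sha256(body.encode("utf-8")).hexdigest(),
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "revision_count": previous_versions + 1,
        # crude structure metrics a ranking layer might experiment with
        "word_count": len(body.split()),
        "paragraph_count": max(1, raw.count("\n\n") + 1),
    }
```

Because hashing happens after normalization, a reformatted copy of the same text (different markup, extra whitespace) still maps to the same fingerprint.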
Installing and implementing “watch-towers” requires not only object-oriented (i.e. document- or content-oriented) tracking but also IAM solutions fully integrated into a blockchain-based infrastructure. In other words, we should track not only content, its changes and the history of resources, but also user behavior in content production, in order to establish author-authority tracking. This is one of the main goals we have set with Remme, and with our masternodes implementation we have taken a big step toward the goals of Web 3.0.
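One way to picture author-authority tracking is a simple scoring rule layered on top of the content fingerprints. The following is purely hypothetical and not how Remme works: authors gain reputation for publishing content first and lose it for republishing content already attributed to someone else.

```python
from collections import defaultdict

class AuthorityTracker:
    """Toy reputation model: +1 for original content, -1 for duplicating others'."""

    def __init__(self) -> None:
        self.first_author: dict[str, str] = {}       # content hash -> first author
        self.scores: dict[str, int] = defaultdict(int)

    def record(self, author: str, content_hash: str) -> None:
        first = self.first_author.get(content_hash)
        if first is None:
            self.first_author[content_hash] = author
            self.scores[author] += 1                 # original contribution
        elif first != author:
            self.scores[author] -= 1                 # duplicated someone else's work
```

Anchoring `first_author` entries in tamper-evident storage is what would make such a score trustworthy across systems.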
With Remme, you will see a semantic-centered network with decentralized indexes and tracked, blockchain-based storage with free, unlimited access for different systems. This network supports the dream: “Free information for everyone.”