Opinion

Privatisation of knowledge: why Google Bard and ChatGPT are a problem for us

Oliver Herren
23/8/2023
Translation: Patrik Stainbrook

In July, Google changed its privacy policy to allow its in-house AI Bard to use the entire public Internet for training. That’s a problem.

The Internet thrives on the exchange of information. People share their knowledge on blogs, social media, YouTube, forums and Wikipedia, just to name a few. And if others want to access this information, they have to search for it on the respective websites.

But what if that’s no longer necessary? What if ChatGPT or Google Bard make searching and browsing obsolete? After all, these tools simply take information and present it to you directly. On the one hand, this has its advantages: you save time and effort. But in the long run, you’ll pay a high price for that convenience.

After all, what happens when no one visits Wikipedia any more because people have grown accustomed to AI tools? Readers, donors and volunteer authors will disappear. Information will become outdated, new pages won’t be added. And knowledge that is currently public will become privatised.

Little incentive to share new knowledge

Providers such as OpenAI and Google are already taking advantage of the general knowledge available on the Internet and building paid services on top of it. They profit from what the general public has created, often funded by the public, by universities or by foundations such as the one behind Wikipedia.

When we extrapolate the success of AI services into the future, many issues come to light. One of them is that less and less information could be published because creators lose visibility. Whether the reward is recognition or money from advertising and subscriptions, it will all evaporate. And then? The creation of new information will come to a standstill. The system breaks down. Knowledge that until now has been put online publicly will be held solely by corporations. And this knowledge can be rationed, concealed or adjusted at will.

How is an AI different from a human who knows the entire Internet?

But it can’t be that bad, right? Don’t these new tools just do what humans do, only more efficiently? Only partly. A human being who wants to learn about a topic and reads and watches everything publicly available to them isn’t comparable to an AI. For one thing, that person will never be able to aggregate all the information. And even if they could, the knowledge would sit with one person, not with everyone who uses a tool. A tool owned by a corporation.

A topic that affects us all

The privatisation of knowledge is a complex issue that affects us all, and it’s time to think about what it means for our future. The difference between reading publicly available information and reading privatised information is more than a matter of access. It’s a central question for how knowledge will be created and shared in the years ahead.

How do you see this development?




Cool: creating interfaces between the real world and the world of pure information. Not cool: driving to the mall to go shopping. My life happens online; the information age is where I feel at home.

