Nowadays just about everyone understands the importance of data, and when it comes to AI, the more data the better. Data is an incredibly valuable and powerful tool, and it is growing at an exponential rate: an estimated 90% of the world’s data was created in just the past two years, and companies need to store more data than ever before. As a result, organizations are asking hard questions about privacy, security, and data governance, including who gets access.
For many years the traditional formula for services on the internet was: give us your data, and we will give you the service for free. As data becomes more valuable, however, will this formula endure or be replaced? Should these companies own our data? Should we get paid for giving our data away? Should our data be stored on more secure blockchains? Siraj Raval, creator of the School of AI and a YouTube celebrity, addresses all of these topics in a recent interview on the AI Today podcast. (Disclosure: I am a principal analyst at Cognilytica and co-host of the AI Today podcast.)
Who manages your data?
As more people become aware of how valuable their information is to companies, and how easily it can be used and sold, they are beginning to question how readily they should hand it over. These kinds of questions came to the fore during the Facebook debacle with Cambridge Analytica. Siraj explains the growing importance of letting the public know specifically what data companies are collecting, what measures will be taken to secure that data, and how the data will be used.
Siraj goes on to explain that blockchain can help address many of these data-related trust and safety issues. According to Siraj, blockchain, which provides a secure, durable, distributed ledger, is known for its resilience. The public is most familiar with blockchain through cryptocurrencies such as Bitcoin, but it is proving to have value in non-monetary realms as well. According to Siraj, placing data on a secure blockchain, paired with AI, can become a tool to help humanity. Used correctly, blockchain and AI could provide greater trust in data-based systems, from facial recognition to voting and other applications where a distributed ledger can help verify results. Blockchain is also increasingly finding its way into applications of many different types, giving users a way to verify every action in the network so that errors can be spotted more easily.
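The verifiability Siraj describes rests on a simple property of a hash-chained ledger: each entry’s hash depends on the entry before it, so altering any record invalidates everything that follows. The sketch below illustrates only that hashing idea, not a real blockchain (there is no consensus or distribution here, and all names are invented for illustration):

```python
import hashlib

def block_hash(prev_hash, record):
    """Hash a record together with the previous block's hash."""
    return hashlib.sha256((prev_hash + record).encode()).hexdigest()

def build_chain(records):
    """Build a list of (record, hash) pairs, each hash chained to the last."""
    chain, prev = [], "genesis"
    for record in records:
        prev = block_hash(prev, record)
        chain.append((record, prev))
    return chain

def verify_chain(chain):
    """Recompute every hash; any tampered record breaks the chain."""
    prev = "genesis"
    for record, stored in chain:
        prev = block_hash(prev, record)
        if prev != stored:
            return False
    return True

ledger = build_chain(["vote:alice", "vote:bob"])
print(verify_chain(ledger))   # the intact chain verifies: True

# Tamper with the first record; every later hash no longer matches.
ledger[0] = ("vote:mallory", ledger[0][1])
print(verify_chain(ledger))   # tampering is detected: False
```

In a real blockchain the same chaining is combined with distribution across many nodes, which is what lets independent parties audit the ledger without trusting any single operator.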
Blockchain and AI are also working together in self-driving vehicle applications. A fleet of fully autonomous taxis can communicate about locations, routes, and where each car is relative to the others on the road in real time. And since actions and movements would be documented on the blockchain, this would also provide transparency into each vehicle’s actions and routes. Siraj compares the blockchain network to a nervous system: the vehicles are always communicating with one another and learning from mistakes.
Are AI and Blockchain just two buzzwords?
AI, and blockchain for that matter, have become two very popular buzzwords, and many companies throw them into their marketing to add excitement to their products. But do all of these products really use AI? Do they really need a blockchain to store and manage data? Is talk of AGI and doomsday scenarios overblown, or could those scary theories be more plausible than we think? These concerns prompt some thoughtful remarks from Siraj, who urges us to be more cautious and careful about the AI systems we create.
Many people in the industry know all too well the fear of an AI winter: overpromising and underdelivering can quickly lead to a decay of interest in AI. The media also likes to play up the scary side of AI. In one 2017 incident, for example, two Facebook AI chatbots were shut down after appearing to create their own language. News outlets ate the story up, exaggerating it into claims that “robots will take over the world.” The story was overblown, and the facts were far less interesting than the media led people to believe. Facebook shut the chatbots down simply because they did not behave as intended, and the “new” language was just the AI doing what AI does: finding a pattern. The two bots fed off each other’s output, degrading their language until they were producing gibberish. By educating the public and providing valuable, honest reporting, these scenarios come to seem far less doomsday-like than they can be spun to be.