Will AI build a more reliable future?
The massive amount of data (now called Big Data) shared every second alters the way we see and believe things and events. We have even "forced" media companies to fight for their own credibility, as news (or what passes for it) appears on social media before it reaches news portals. It has radically changed how ideas and thoughts spread across the masses, measurably altering the way we observe reality. That last part can be scary if you consider that social media news can be "crafted", making people believe things that never happened. We could call it Social Engineering taken to the nth level, or simply social hacking.
Consider a recent example of social hacking: a video in which a guy "adds" an earphone jack to the new iPhone 7 by drilling a hole in it. The video is visibly fake (and hilarious), yet its more than 12 million views (as of today) give an idea of how much a viral video can affect society. Worldwide.
In the era of information, ignorance is a choice
The real problem with Big Data as a source of knowledge is the lack of credibility of its "sources".
Take, for example, Wikipedia, the "free encyclopedia". I have seen materials handed out in local schools that were nothing more than printouts of its pages. But is Wikipedia a reliable source of knowledge? I am not claiming it is, nor am I claiming the contrary, because that is Wikipedia's own struggle. Since all of its content is crowdsourced, crafted or manipulated information is a real possibility. Hence they started developing an "intelligent" system that can assess whether edits (Ref. Artificial intelligence service gives Wikipedians 'X-ray specs' to see through bad edits) could be damaging, warning authors about potentially dangerous content within minutes.
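Wikipedia's actual service relies on machine-learning models trained on thousands of human-labeled edits; as a purely illustrative sketch (not the real system, with invented words and weights), the idea of scoring an edit for potential damage might look something like this:

```python
# Toy illustration only: a naive heuristic "edit damage" scorer.
# Wikipedia's real service uses trained ML models; the word list
# and weights below are invented for this example.
SUSPICIOUS_WORDS = {"obviously", "everyone knows", "fake", "idiot"}

def damage_score(old_text: str, new_text: str) -> float:
    """Return a crude 0..1 score of how suspicious an edit looks."""
    score = 0.0
    added = new_text.lower()
    # Flag loaded or unsourced-sounding language in the new revision.
    score += 0.2 * sum(w in added for w in SUSPICIOUS_WORDS)
    # Large deletions are a classic sign of vandalism.
    if len(new_text) < 0.5 * len(old_text):
        score += 0.5
    # Shouting (mostly upper-case text) is another weak signal.
    letters = [c for c in new_text if c.isalpha()]
    if letters and sum(c.isupper() for c in letters) / len(letters) > 0.5:
        score += 0.3
    return min(score, 1.0)

# A blanking-style edit scores high; a normal expansion scores low.
print(damage_score("A long, carefully sourced paragraph about history.", "FAKE"))
print(damage_score("abc", "abc def ghi"))
```

A real system replaces these hand-picked rules with features learned from labeled edit history, but the pipeline shape is the same: compare revisions, extract signals, emit a score that triggers a warning above some threshold.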
Another big issue I have already run into a few times, and one that fascinates me, is public security. I have seen systems that can, supposedly, predict crimes by analyzing Big Data sources such as social networks. But in a social model where massive amounts of data can be crafted by "hacking" social behavior, how could a crime-prediction system possibly be reliable?
Apparently, whether we like it or not, the only possible answer is Artificial Intelligence (in the broadest sense of the term).
As I mentioned in my previous post, human beings are driven by many factors, one of the strongest being emotion. A picture showing violence with a racist caption can provoke a reaction to something that never happened. An unreliable Wikipedia article, printed by a lazy teacher who did not study enough and handed to students, can damage their learning by "implanting" wrong concepts about a topic. A Police Department's decision to move forces to an area where a system predicted a riot, based on untrusted social media information, could cause massive damage. According to a Guardian article, the gunman who killed nine people in Munich last July lured out his victims by posting an offer of free McDonald's food on Facebook.
The amount of data and information crossing the Internet has grown so far beyond human reach that no person could even begin to process it. Billions of articles appear on the Internet every single day, and unless the reader (and only the reader) has the time and ability to check whether a story is true, he or she is prone to be deceived.
It is natural to think that we "need" something (or Someone) to help us survive this mess: something or Someone able to evaluate a piece of information and its source and determine whether it is reliable or true. Something that could confirm a Wikipedia article is well-founded before it is passed to students who will gain knowledge from it. Something or Someone that could help a security organization evaluate and determine the risk level at a given time in a given place.
And while people like Elon Musk, Bill Gates, and Stephen Hawking voice their concerns about AI and the impact it will have on humanity (we call it the Singularity for a reason), it is ironic to see how completely we have already lost control of the data spreading across the world, and how hard we are struggling to take that control back for the good of our way of life: by creating something even less likely to be controlled, in the untold hope of living better.
Conclusion (with spoiler alert): an honorable mention for this post goes to Isaac Asimov's Foundation (1988, Prelude to Foundation), in which the main character develops a tool that can predict human behavior centuries before it happens. The predictive power of the device is, however, limited to social masses: it cannot predict a single individual's behavior. Interestingly enough, if you read the whole Foundation cycle you may find that the entire history of the human race has been driven by an AI system that, in the background, manipulated events and people across the centuries so humankind could survive its own behavior.