Fake news is one of the reasons the Internet is heading in the wrong direction: whether it comes from dodgy websites or from fake social media profiles, publishing fake news remains one of the biggest problems online. When Mark Zuckerberg testified before Congress, he argued that what we need to stop the whole fake news market is an AI capable of understanding which content comes from relevant sources and which pieces of information are not reporting true facts. The question right now is: have we reached a point where AI can understand what is “true” and what is not? Let’s try to break it down in a simpler way.
How does it work?
The whole process is based on ultrafast split testing. When people want to check whether an article is legitimate, they usually run a quick Google search; if many articles report the same news in the same way, it is probably true. The AI would be required to do this instantly, scanning multiple results and weighting each website’s Trust Flow as ranked by Google’s algorithm. Once all the results are processed, the AI decides whether they align with information from relevant sources, such as the NY Times.
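The check described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not Factmata's actual system: the trust scores, the similarity function, and the domain list are all assumptions standing in for a real Trust Flow metric and a real search backend.

```python
# Hypothetical sketch: score an article's credibility by cross-referencing
# its claim against search results, weighting each match by the source's
# assumed trust score. All scores and domains here are illustrative.

from difflib import SequenceMatcher

# Assumed per-domain trust scores (0.0-1.0); a real system would pull
# these from a metric like a site's Trust Flow rather than hardcode them.
TRUST = {
    "nytimes.com": 0.95,
    "reuters.com": 0.93,
    "random-viral-blog.example": 0.10,
}

def similarity(a: str, b: str) -> float:
    """Crude textual similarity between two headlines (0.0-1.0)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def credibility_score(claim: str, results: list) -> float:
    """Average trust-weighted similarity across search results.

    `results` is a list of (domain, headline) pairs, as if returned by
    an ultrafast search across many outlets. Unknown domains get a
    neutral default trust of 0.3 (another assumption).
    """
    if not results:
        return 0.0
    weighted = [
        TRUST.get(domain, 0.3) * similarity(claim, headline)
        for domain, headline in results
    ]
    return sum(weighted) / len(weighted)

claim = "City council approves new transit budget"
results = [
    ("nytimes.com", "City council approves new transit budget"),
    ("reuters.com", "Council approves transit budget for city"),
]
score = credibility_score(claim, results)
```

A claim echoed by several high-trust outlets scores high; a claim that only appears on low-trust domains, or nowhere else at all, scores low.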
Once it’s found
If the AI finds a viral article with distorted information, it would be required to report it to the relevant authorities. Currently this is done manually, which is why so many pieces of misinformation slip through while they keep accumulating shares and comments. A fast, automated AI would be a genuine game changer here.
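The automated triage step could look something like the sketch below, assuming articles already carry a credibility score from the earlier cross-referencing stage. The threshold value and the dictionary shape are illustrative assumptions, not part of any real reporting pipeline.

```python
# Hypothetical sketch: automatically flag low-credibility articles for
# reporting instead of waiting on manual review. The cutoff of 0.4 is
# an assumed threshold, not an established standard.

REPORT_THRESHOLD = 0.4

def triage(articles: list) -> list:
    """Return the articles whose credibility score falls below the
    threshold, i.e. the ones an automated system would queue up to
    report to the relevant authorities."""
    return [a for a in articles if a["credibility"] < REPORT_THRESHOLD]

report_queue = triage([
    {"url": "https://example.com/real-story", "credibility": 0.9},
    {"url": "https://example.com/viral-hoax", "credibility": 0.1},
])
# report_queue now contains only the viral-hoax entry
```

The point of automating this step is speed: a human moderator reviews one post at a time, while a filter like this can scan every incoming article as it appears.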
Over the last year, interest in the fake news subject has risen considerably. Searches related to fake news currently fluctuate between 70.8 thousand and 118 thousand clicks per month on Google Search, and the topic draws over 251.2 thousand mentions on Twitter per month. In a recent poll, 64% of respondents said that fake news caused “a great deal of confusion”.
Is it possible right now?
Let’s put it straight: this kind of AI is already possible to create, manage and develop, since the underlying process is not far removed from ordinary software development, such as mobile app development. The real problems revolve around Facebook, Twitter and Tumblr, since each platform has its own architecture and its own reporting process for content that violates guidelines.
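One common way to handle that per-platform fragmentation is an adapter layer: the detection engine talks to one interface, and each platform implements its own submission logic behind it. Everything below is a hypothetical sketch; the class and method names are assumptions, not real Facebook or Twitter APIs.

```python
# Hypothetical sketch: a common reporting interface with one adapter per
# platform, since each social network has a different architecture and
# reporting flow. No real platform endpoints are called here.

from abc import ABC, abstractmethod

class ReportAdapter(ABC):
    """Interface the detection engine calls; each platform hides its
    own reporting process behind this single method."""

    @abstractmethod
    def submit_report(self, post_id: str, reason: str) -> str: ...

class FacebookAdapter(ReportAdapter):
    def submit_report(self, post_id: str, reason: str) -> str:
        # A real adapter would call Facebook's own reporting flow here.
        return f"facebook:{post_id}:{reason}"

class TwitterAdapter(ReportAdapter):
    def submit_report(self, post_id: str, reason: str) -> str:
        # Twitter's process differs, but the engine never needs to know.
        return f"twitter:{post_id}:{reason}"

def report_everywhere(post_id: str, reason: str, adapters: list) -> list:
    """Fan one detection result out to every platform adapter."""
    return [a.submit_report(post_id, reason) for a in adapters]

receipts = report_everywhere(
    "123", "fake-news", [FacebookAdapter(), TwitterAdapter()]
)
```

Adding support for another network (say, Tumblr) would then mean writing one more adapter, without touching the detection engine itself.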
It’s a startup thing
Not many companies are trying to develop an AI able to control and manage an enormous amount of traffic while constantly split testing against other results to guarantee both the veracity of a news article and the user experience around it. The brightest one, without a doubt, is Factmata: a London-based startup backed by Mark Cuban. Factmata has also attracted personalities like Craigslist founder Craig Newmark and Twitter co-founder Biz Stone, and it is currently listed among the top startups to watch in 2018.
AI will most likely become a central point in news and content administration in the near future, following Google’s and Facebook’s guidelines, terms and conditions. What we should expect within the next five years is a user-friendly interface that shows what the dodgy articles are and where they are.