AI can go wrong in many ways. When failures become public, people often find them funny, but they can damage the reputation of the company behind them. Below are viral moments when AI went wrong.

Chinese billionaire’s face identified as a jaywalker

Traffic police in major Chinese cities use AI to deal with jaywalking. They deploy smart cameras with face recognition at intersections to detect and identify jaywalkers, whose partially obscured names and faces are then displayed on public screens.

The AI system in the southern port city of Ningbo, however, recently embarrassed itself when it falsely “recognized” a photograph of Chinese billionaire Mingzhu Dong on an advertisement on the side of a passing bus as a jaywalker. The error went viral on Chinese social media, and Ningbo police apologized. Dong was unfazed, posting on Weibo: “This may be a trivial matter. Safe travel is more important.”
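A sketch of how such a false positive can happen: if a recognition pipeline compares face embeddings against enrolled identities using only a similarity threshold, with no check that the face belongs to a live person, a printed photo can match. The embeddings, threshold, and identify() helper below are invented for illustration; they are not the Ningbo system's actual design.

```python
# Illustrative sketch only: a fixed similarity threshold with no liveness
# check can "recognize" a printed photo of a face. All values are made up.

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)

# Hypothetical enrolled embedding (real systems use 128+ dimensions).
enrolled = {"Mingzhu Dong": [0.9, 0.1, 0.4]}

# Embedding extracted from a billboard photo, not a live pedestrian.
billboard_face = [0.88, 0.12, 0.41]

MATCH_THRESHOLD = 0.95  # a pure threshold match, no liveness detection

def identify(face_embedding):
    """Return the first enrolled name whose similarity clears the threshold."""
    for name, emb in enrolled.items():
        if cosine_similarity(face_embedding, emb) >= MATCH_THRESHOLD:
            return name
    return None

print(identify(billboard_face))  # matches, even though it is just a photo
```

The fix deployed in practice is typically a liveness or motion check layered on top of the similarity match, so that static images on vehicles or billboards are filtered out before identification.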

Uber self-driving car kills a pedestrian

In the first known autonomous vehicle-related pedestrian death on a public road, an Uber self-driving SUV struck and killed a female pedestrian in Tempe, Arizona. The Uber vehicle was in autonomous mode, with a human safety driver at the wheel.

So what happened? Uber discovered that its self-driving software decided not to take any action after the car’s sensors detected the pedestrian. According to the US National Transportation Safety Board’s preliminary report on the accident, Uber’s autonomous mode also disables Volvo’s factory-installed automatic emergency braking system.

In the wake of the tragedy, Uber suspended self-driving testing in North American cities, and Nvidia and Toyota also stopped their self-driving road tests in the US. Eight months after the accident, Uber announced plans to resume self-driving road tests in Pittsburgh, although the company’s self-driving future remains uncertain.

IBM Watson comes up short in healthcare

IBM has been exploring Watson’s AI capabilities across a broad range of applications and processes, including healthcare. In 2013 IBM launched Watson’s first practical application, for cancer treatment recommendation, and since then the company has secured a variety of key partnerships with hospitals and research centres. But Watson Health has not impressed doctors. Some complained that it gave incorrect recommendations on cancer treatments that could have severe, even fatal, consequences.

After spending years on the project without significant advances, IBM is reportedly downsizing Watson Health and laying off a substantial share of the division’s staff.

Amazon AI recruiting tool is gender-biased

Amazon HR reportedly used AI-enabled recruiting software between 2014 and 2017 to help review resumes and make recommendations. The software, however, was found to favour male applicants because its model was trained on resumes submitted to Amazon over the previous decade, a period when more male candidates were hired.

The software reportedly downgraded resumes that contained the word “women’s” or otherwise implied the applicant was female, for instance because the applicant had attended a women’s college. Amazon has since abandoned the software.
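A minimal sketch of how this kind of bias arises: a model scored on skewed historical outcomes will assign negative weight to words that happen to correlate with rejected candidates, even when those words say nothing about qualifications. The tiny dataset and word-frequency scoring below are invented for illustration; they are not Amazon's actual system or data.

```python
# Toy sketch only: a scorer trained on historically skewed outcomes learns
# to penalize the word "women's" itself. All data here is fabricated.
from collections import Counter

# Fabricated historical outcomes, skewed toward male hires.
hired = [
    "software engineer chess club captain",
    "software engineer robotics team",
    "backend developer chess club",
]
rejected = [
    "software engineer women's chess club captain",
    "developer women's college robotics team",
]

def word_weights(hired_docs, rejected_docs):
    """Weight each word by how much more often it appears in hired resumes.

    Positive weight: the word is associated with hiring in the training data.
    Negative weight: associated with rejection, regardless of relevance.
    """
    h = Counter(w for doc in hired_docs for w in doc.split())
    r = Counter(w for doc in rejected_docs for w in doc.split())
    vocab = set(h) | set(r)
    return {w: h[w] / len(hired_docs) - r[w] / len(rejected_docs) for w in vocab}

weights = word_weights(hired, rejected)
print(weights["women's"])   # negative: the word itself is penalized
print(weights["engineer"])  # roughly neutral: appears in both groups
```

The underlying lesson, as the Amazon story shows, is that a model trained to reproduce past decisions will also reproduce past biases unless the training data or objective is corrected for them.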

Google Photos confuses skier and mountain

Google Photos includes a relatively little-known AI feature that automatically detects images with similar backgrounds or scenes and offers to merge them into a single panoramic picture. In January, Reddit user “MalletsDarker” posted three photos taken at a ski resort: two were landscapes, and the third was a shot of his friend. When Google Photos merged the three, something strange happened: his friend’s head was rendered as a peak-like giant gazing out over the forest.

The hilarious result received 202k upvotes. Social media hailed the Google algorithm’s seamless blending of the pictures while mocking it for missing the compositional basics.

AI World Cup 2018 predictions mostly wrong

The World Cup 2018 was the biggest sporting event of the year, and AI researchers at Goldman Sachs, Perm State National Research University, Germany’s Technical University of Dortmund, and Electronic Arts ran machine learning models to predict outcomes for the multi-stage competition. Most, however, were wildly wrong, with only EA, which ran its simulations using updated ratings from its video game FIFA 18, correctly favouring eventual winner France. The EA game engine is backed by numerous machine learning techniques designed to make player performance as realistic as possible.

A robot beats the ‘I’m not a robot’ CAPTCHA

We all know the ‘I’m not a robot’ CAPTCHA, in which a machine asks us to prove we are not robots. A video of a robot beating the ‘I’m not a robot’ CAPTCHA went viral.