In short: Weeks after Cruise launched its driverless robotaxi service to the public in San Francisco, numerous vehicles mysteriously piled up and blocked several lanes of traffic entering downtown.
Apparently a bunch of driverless Cruise vehicles were "stuck" blocking an intersection for "a couple of hours" the other night, according to a redditor who posted these images. No word on what the issue might have been. https://t.co/EenuqbjTsD pic.twitter.com/WkPQCF1SjL
— E.W. Niedermeyer (@Tweetermeyer) June 30, 2022
At least seven cars were spotted clustered at a location in the Civic Center area of San Francisco overnight. The driverless vehicles had stopped for unknown reasons, preventing nearby traffic from moving, and it remains unclear how the cars came to suffer an apparent technical problem. Some of those questions were raised in an anonymous letter sent to the California Public Utilities Commission, which claimed Cruise was looking to launch its commercial robotaxi service too soon, The Wall Street Journal reported.
Head of Tesla’s AI and Autopilot efforts leaves
Andrej Karpathy, Tesla’s senior director of AI and a computer vision expert who helped the automaker develop its self-driving software, announced he was leaving after five years at the company.
It's been a great pleasure to help Tesla towards its goals over the last 5 years and a difficult decision to part ways. In that time, Autopilot graduated from lane keeping to city streets and I look forward to seeing the exceptionally strong Autopilot team continue that momentum.
— Andrej Karpathy (@karpathy) July 13, 2022
There were rumors that Karpathy would not return after he said in March this year that he was taking a four-month sabbatical, according to Electrek. Karpathy was hired to lead Tesla’s AI and self-driving efforts in 2017, leaving his previous position as a research scientist at OpenAI.
Boss Elon Musk thanked him for his service via a message on Twitter. Karpathy leaves at a risky time for the company. Tesla’s share price fell amid deteriorating market conditions and the company closed one of its offices in San Mateo. It also faces intense scrutiny that could lead the National Highway Traffic Safety Administration to issue a recall for hundreds of thousands of its cars.
Karpathy said he wasn’t sure what he would do next, but would focus on “technical work in AI, open source and education.”
AI heading to the 2022 World Cup
AI-powered cameras will be deployed to help referees make offside calls at the 2022 World Cup, to be held in Qatar from November.
The technology involves placing a sensor inside the soccer ball and a series of cameras under the roof of the stadiums. The sensor will monitor its position on the football field and the images from the cameras will be fed into machine learning algorithms capable of tracking players’ locations.
When the software detects that a player is in an offside position, an alert is sent to officials in a nearby control room. The information is then relayed to the referee, who decides whether or not to call the infraction.
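The decision the software automates boils down to the offside rule itself. As a rough illustration only (FIFA has not published its algorithm, and the positions, goal orientation, and function below are made up for this sketch), the core check might look like:

```python
# Hypothetical, greatly simplified offside check -- not FIFA's actual system.
# Positions are 1-D coordinates along the pitch, with the defending
# team's goal line at x = 0.

def is_offside(attacker_x: float, ball_x: float, defender_xs: list[float]) -> bool:
    """True if the attacker is nearer the goal line than both the ball
    and the second-last defender at the moment the ball is played."""
    if len(defender_xs) < 2:
        # Degenerate case: fewer than two defenders tracked.
        return attacker_x < ball_x
    # Second-last defender is usually the last outfield player
    # (the goalkeeper is typically the closest to the goal line).
    second_last = sorted(defender_xs)[1]
    return attacker_x < ball_x and attacker_x < second_last

# Goalkeeper at 2.0 and last outfield defender at 15.0: an attacker at
# 10.0 receiving a ball played from 20.0 is offside.
print(is_offside(10.0, 20.0, [2.0, 15.0]))  # True
print(is_offside(16.0, 20.0, [2.0, 15.0]))  # False
```

In the real system, the hard part is not this comparison but producing the inputs: fusing the ball sensor's position with the camera-derived limb positions of every player at the exact instant of the pass.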
Pierluigi Collina, chairman of FIFA’s referees committee, said the automated system will allow referees to make “faster and more accurate decisions”, and stressed that humans, not robots, remain in charge, according to The Verge. Gianni Infantino, the current FIFA president, said the technology has been in development for three years and takes only seconds to signal an offside.
AI research ethics review, yes or no?
Academic conferences are asking AI researchers to examine in technical papers how their research could potentially harm society, and not everyone is happy.
As AI and machine learning technology continues to advance in academia, it is inevitable that some of these techniques will eventually be deployed in real life. Applications often reveal that algorithms can be used for better or for worse. Improved computer vision algorithms, for example, are helping to develop self-driving cars, but are also being used for surveillance.
AI-focused conferences like Neural Information Processing Systems and now the Computer Vision and Pattern Recognition conference are asking researchers to write paragraphs examining if and how their research might be harmful. But not everyone supports the move, Protocol reported. Some researchers believe this is beyond the scope of their work or could impact research freedom, while others have acknowledged that their work could be abused in certain use cases.
“We are still at a point in AI ethics where it is very difficult for us to properly assess and mitigate ethical issues without the partnership of people closely involved in the development of this technology,” said Alice Xiang, who heads Sony Group’s AI Ethics Office and served as general co-chair of the ACM Conference on Fairness, Accountability, and Transparency.
Clearview fined by Greek authorities
Controversial facial recognition startup Clearview has been fined €20 million by Greece’s Hellenic Data Protection Authority (HDPA) for breaching privacy laws.
The company has been accused of breaking current EU GDPR rules by failing to obtain explicit consent to use individuals’ personal data when it scraped billions of photographs posted on the internet. These images were used to build Clearview’s database for its face-matching algorithms.
Given an image, the company’s software searches its database for potential image matches to reveal someone’s identity by linking to their social media profiles, for example.
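At its core, that kind of face search is a nearest-neighbor lookup over face embeddings. The sketch below is an assumption about the general technique, not Clearview's actual code: the embedding vectors, profile names, and similarity threshold are all invented for illustration.

```python
import math

# Illustrative face-matching step (not Clearview's implementation):
# compare a query face embedding against stored embeddings by cosine
# similarity and return the closest identity above a threshold.

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def best_match(query: list[float], database: dict[str, list[float]],
               threshold: float = 0.9):
    """Return the identity whose stored embedding is most similar to
    the query, or None if nothing clears the threshold."""
    name, score = max(
        ((n, cosine_similarity(query, emb)) for n, emb in database.items()),
        key=lambda pair: pair[1],
    )
    return name if score >= threshold else None

# Made-up 3-D embeddings standing in for real high-dimensional ones.
db = {
    "profile_a": [0.9, 0.1, 0.2],
    "profile_b": [0.1, 0.8, 0.5],
}
print(best_match([0.88, 0.12, 0.21], db))  # profile_a
```

Production systems use high-dimensional embeddings from a trained neural network and approximate nearest-neighbor indexes to search billions of entries quickly, but the matching principle is the same.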
The fine was the highest amount ever ordered by the HDPA, according to The Record. A spokesperson for Clearview claimed that it “does not undertake any activity which would otherwise mean that it is subject to the GDPR”. ®