AI Snake Oil: Dawn Project Exposes Big Tech's Flaws in Critical Infrastructure

Safety advocacy group The Dawn Project has escalated its campaign to expose the shortcomings of artificial intelligence (AI) systems and to argue against their deployment in safety-critical applications. The group warns of potential "cataclysmic cyber attacks" arising from the use of vulnerable, error-prone commercial software in critical sectors such as water management and the power grid.

The Dawn Project highlights the glaring inconsistencies within AI systems, citing examples of chatbots failing to provide basic information or generating incorrect responses due to flawed training data. Despite these acknowledged weaknesses, AI developers are pushing for the adoption of AI in critical infrastructure, a move The Dawn Project deems premature and potentially catastrophic.

"When AI systems are used in major technologies with potentially devastating consequences, it is imperative that they never fail during a safety-critical incident," the group asserts. "The dangers of deploying these technologies on a wider scale for weapons or heavy machinery, including cars, cannot be ignored or underestimated."

As part of its campaign, which has included prominent advertisements in The Wall Street Journal and The New York Times, Dan O'Dowd, software entrepreneur and founder of The Dawn Project, has publicly denounced Tesla's AI-powered Full Self-Driving software. He contends that despite over a decade of development, the technology remains unreliable, citing instances where it illegally overtakes stopped school buses and endangers pedestrians.

The US National Highway Traffic Safety Administration (NHTSA) has previously investigated potential safety defects in Tesla's Autopilot system. The agency's investigation, which encompassed crash analysis, human factors, vehicle evaluations, and driver engagement assessments, identified at least 13 crashes resulting in fatalities, as well as many more causing serious injuries, apparently linked to driver misuse of the system.

On its website, The Dawn Project draws a parallel between AI proponents and gamblers, both relying on flawed "systems" while seeking immense financial resources and power.

"They claim to be the masters of the technology that will usher humanity into a paradise where everyone gets everything they want and no one has to work, or maybe it will exterminate everyone," the group states. "They are simply modern-day travelling preachers peddling quackery."

The group's advertisements depict Microsoft, Google, and OpenAI as peddlers of "AI snake oil". It points to Microsoft's $13 billion investment in OpenAI's ChatGPT, while highlighting the chatbot's failure to accurately list US states ending in 'Y', providing three incorrect answers out of five.

"AI researchers acknowledge that 'hallucinations' are a fundamental and unresolvable weakness of large language models and admit they cannot explain why AIs make certain bad decisions," says O'Dowd. "We must demand consistency and reliability in our safety-critical systems. We must reject hallucinating AIs and apply the same rigorous standards of software security we demand for nuclear security to the critical infrastructure upon which society and millions of lives depend."

The Dawn Project warns against the use of commercial-grade software in applications that could potentially harm or kill people. "These systems have already been proven to be unfit for purpose," they state. "How can technology giants claim they are ready for deployment in potentially deadly infrastructure solutions?"

The Dawn Project's campaign raises serious concerns about the unchecked development and deployment of AI, urging a more cautious approach that prioritizes safety and reliability before AI is introduced into critical systems affecting public safety and wellbeing.