When AI ≠ Nice, stop saying oops
Unpacking the unintended consequences of AI and the lack of mature process in this industry
#AI is a blessing until it isn't. Welcome to the realm of #AI ethics, where the recurring theme is unintended consequences.
Too frequently, our industry has uttered "oops." Numerous companies have made headlines due to these unintended consequences. Search for "racist AI" or "sexist AI" on Google, or even ask GPT-4, and you'll discover a plethora of examples.
"All models are wrong, some are useful."
- George Box
Models fail in production every day, straying from their training sets. This is precisely why those employing #AI in practical applications have alerts and systems in place to swiftly raise the alarm. This is the raison d'être for #MLOps, and why some companies surpass the 1,000 models deployed milestone—they have alarms. These alarms were, in fact, inspired by the manufacturing and industrial sectors.
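To make the "alarm" idea concrete, here is a minimal sketch of what such a check can look like, using a two-sample Kolmogorov-Smirnov test to compare live feature distributions against the training data. The feature names, threshold, and alerting hook are all hypothetical; a real #MLOps stack would route this into proper monitoring and on-call tooling rather than a print statement.

```python
# Minimal sketch of a production drift alarm (illustrative only).
# Assumes you can pull a recent sample of production inputs and the
# original training data for each feature you care about.
import numpy as np
from scipy.stats import ks_2samp

DRIFT_P_VALUE = 0.01  # hypothetical threshold; tune per feature and traffic volume


def feature_has_drifted(train_values, live_values):
    """Two-sample Kolmogorov-Smirnov test: has this feature left its training distribution?"""
    result = ks_2samp(train_values, live_values)
    return result.pvalue < DRIFT_P_VALUE


def raise_alarm(feature_name):
    # In a real system this would page on-call or open an incident, not just print.
    print(f"ALERT: distribution drift detected on feature '{feature_name}'")


def monitor(training_data, live_data):
    """Compare each feature's live sample against training data and alarm on drift."""
    for feature_name, train_values in training_data.items():
        if feature_has_drifted(train_values, live_data[feature_name]):
            raise_alarm(feature_name)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    training_data = {"age": rng.normal(35, 10, 5_000)}
    live_data = {"age": rng.normal(48, 10, 5_000)}  # simulated drift
    monitor(training_data, live_data)
```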
If chemical engineers from the world's top oil and gas companies were spearheading #AI efforts, perhaps I'd feel more at ease.
Or if the ingenious biochemists involved in biopharmaceutical research, designing experiments that culminate in large stainless steel bioreactors containing 10,000 liters of genetically modified Chinese Hamster Ovary (CHO) cells, were in charge, I'd feel more secure.
Or if the brilliant chip designers creating next-generation chip architectures for leading chip manufacturers and global foundries like #TSMC and #globalfoundries were in control, I'd feel more comfortable.
Or if cardiothoracic surgeons who perform open-heart surgery, aeronautical engineers at Lockheed Martin designing cutting-edge fighter jets, or the exceptional rocket engineers at SpaceX were in charge, I'd feel reassured.
Or if nuclear engineers responsible for designing and maintaining the complex safety systems of nuclear power plants, ensuring the delicate balance between power generation and radiation containment, were leading the way, I'd feel more protected.
Or if air traffic controllers, who expertly navigate the crowded skies, coordinating thousands of flights each day while making split-second decisions to prevent mid-air collisions and ensure passenger safety, were in command, I'd feel more reassured.
Or if structural engineers, who meticulously calculate the forces and stresses that buildings and bridges must withstand, designing resilient structures to endure earthquakes, hurricanes, and other natural disasters, were at the helm, I'd feel more secure.
Or if marine biologists researching the delicate ecosystems of coral reefs, striving to develop sustainable solutions to protect these vital underwater habitats from pollution and climate change, were overseeing the process, I'd feel more confident.
Or if forensic scientists in criminal investigations, utilizing cutting-edge technology to meticulously analyze DNA evidence, fingerprints, and other crucial clues to bring criminals to justice and exonerate the innocent, were in charge, I'd feel more certain.
I would feel more confident because ALL of these professionals come from industries where, over half a century of maturing process, "oops" has become unacceptable. Regularly conducting potential problem analyses? Absolutely. Doing everything possible to avoid failure? Of course. The #AI industry, by contrast, has been the wild west for too long. Mature process is a rarity; we prefer to run first and assess later. This is exactly why the recent push with GPT-4 should concern many of us: compared to our professional peers in other industries, we haven't earned the trust, and we haven't had the best track record.
This post was inspired by using #MidJourney to create creatures representing a brand. I'm not requesting anything terrifying, nor am I leading the prompt. I adore the one on the left but despise the one on the right.
The one on the right exemplifies an unintended consequence: a peculiar bias linked to the fast-food industry (or anything else, for that matter) unexpectedly emerges from the model and catches you off guard. There are other biases hiding in these models that aren't well understood yet. For example, if I ask these systems to produce creatures that exemplify companies like Snowflake, Nvidia, Dataiku, or NetApp, I get these:
If I then ask it to generate creature logos for oil and gas companies, I get darker images like this:
Ultimately, the bias comes from us; we are the ones who create the data and train these models on it. Perhaps what this post shows is that when a company belongs to the fast-food industry, oil and gas, or any other industry that has drawn negative sentiment (oil spills, health concerns, and so on), that negative-sentiment bias comes rolling out of the model and surprises us.
In the end, I wish the AI industry would borrow more from the industrial, high-consequence fields: borrow their process, borrow their level of caution.
I appreciate feedback and comments, as always. If you haven't yet, please give my new AI podcast a listen:
Good point about maturity vs. oops rate. Many AI maturity models focus more on adoption rate than oops rate. The Capability Maturity Model that software engineering adopted 40 years ago is a good starting point for process maturity modeling.