Machine Intelligence Insurance
Published On: 8/9/24, 19:38
Author: Julian Bleecker
How do you mitigate the risk of misleading intelligences?
Protect yourself from false, misleading, hallucinating intelligences with machine intelligence insurance.
Nationwide Recombinant provides you with the peace of mind you need to operate without concern for out-of-band agentic actions, whether from your own CIs or from a third party. Never worry again that non-refactored agents, operating against outdated interfaces, APIs, or websockets, or running from poorly articulated pre-prompts or multishots, could disrupt an otherwise well-organized workplan or vacation experience.
Allstate Anticipatory Agentic Services keeps you in the know with our 24/7 active-loop monitoring of all your agentics and their langrafs. Errant persistent messaging, unexpected prompt reflow, overloaded inboards? We alert you before they happen. Unnecessary loops and edge graphs are all captured and automatically refactored. With Allstate Anticipatory, you'll never have to worry again about misbehaving or miscreant agentic activities, even with third- or fourth-party installations. Contact your nearest Allstate Anticipatory office and learn how you can embed the world's #1 protective anticipatory plan today.
…but your Fortune letter opposing SB-1047 seems off the mark to me, in part because it doesn’t fit my understanding of what SB-1047 actually calls for, and in part because it does too little by way of offering a legitimate alternative.
You claim that “SB-1047 will unduly punish developers and stifle innovation. In the event of misuse of an AI model, SB-1047 holds liable the party responsible and the original developer of that model”, and, in this connection, that “It is impossible for each AI developer—particularly budding coders and entrepreneurs—to predict every possible use of their model.” But SB-1047 does not require predicting every use.
Rather, it focuses on specific, serious “critical harms”: mass casualties, weapons of mass destruction, large-scale cyberattacks, and AI models autonomously committing serious felonies. Those categories seem reasonable to me, and I don’t understand what would justify an exemption there. Even then, developers are required only to implement “reasonable safeguards” against these severe risks, not to fully mitigate them. Furthermore, much of what would be required is something companies have already committed to voluntarily, in discussions at the White House and in Seoul. None of this is really conveyed in your Fortune essay.
You argue that SB-1047 risks “stifling innovation,” asserting that the bill could harm open-source AI development because of its “kill switch” requirements. But as I understand the latest version of the bill, the “kill switch” requirement doesn’t apply to open-source models once they are out of the original developer’s control.
You claim that the bill will hurt academia and “little tech” and put others at a disadvantage relative to tech giants. But you don’t make clear that most of the bill’s requirements are limited to models with training runs costing $100 million or more. Companies that can afford such runs, presumably valued in the billions, are not exactly “little tech”, and should be able to handle what is required.
You say that you favor AI governance, but don’t make any positive, concrete suggestion for how to address risks such as mass casualties, weapons of mass destruction, large-scale cyberattacks, and AI models autonomously committing serious felonies. With no other serious proposal on offer, I personally favor SB-1047, though I would welcome discussion of alternatives.
Lastly, asking for standards (and a degree of care) is not unique to AI; it’s common across many industries to require that companies evaluate the safety of their products against set standards: just look at pharmaceuticals, aviation, automobiles, and so on. As Bengio, Russell, Hinton, and Lessig observed, “There are fewer regulations on AI systems that could pose catastrophic risks than on sandwich shops or hairdressers.”
Your letter doesn’t really grapple with this.
I’m sure that your argument against SB-1047 was made in good faith, and with the best of intentions. But as noted above there seem to be some inaccuracies in your essay, and I wonder if you would be willing to reconsider in light of these clarifications.
Best regards,
Gary
Professor Emeritus, New York University
Founder and CEO, Geometric Intelligence (acquired by Uber)