As AI adoption accelerates, the consequences—intended and not—are becoming harder to ignore. From biased algorithms to opaque decision-making and chatbot misinformation, companies are increasingly exposed to legal, reputational, and ethical risks. And with the rollback of federal regulation, many are navigating this landscape with fewer guardrails. But fewer guardrails doesn’t mean fewer consequences—only that the burden of responsibility shifts more squarely onto the businesses deploying these systems. Legal, financial, and reputational risks haven’t disappeared; they’ve just moved upstream.
Responsibility in AI is murky. The question of who is accountable when things go wrong is complicated by the number of stakeholders involved: developers, deployers, end-users, and platform providers. The “tool vs. agent” debate continues to blur lines, and the opacity of many systems, especially deep learning models such as the large language models behind today’s chatbots, makes it harder to determine fault.
Recent legal cases underscore this complexity. Air Canada denied liability when its chatbot gave a passenger incorrect information about its bereavement fare policy. SafeRent’s tenant screening tool was found to disadvantage minority applicants, but the company claimed it merely made recommendations and should not be held responsible for the final decision. Character.AI, facing lawsuits linked to suicide, argued that its chatbot output should be protected under the First Amendment. Meanwhile, Meta continues to assert that it is a platform, not a publisher, and therefore not accountable for user-generated harm.
But complexity does not grant absolution.
Matthew Da Silva synthesizes work on “responsibility gaps,” noting that they arise when no single party seems to meet the standard conditions for moral responsibility (causal control, relevant knowledge, and reasonable alternatives) across the full chain of harm. But those conditions don’t disappear simply because multiple actors are involved. In enterprise AI, the deployer typically has decisive control over goals, guardrails, monitoring, and escalation paths. Even in the most fragmented corporate architecture, it is far from evident that a genuine responsibility gap exists.
Moreover, a defense of “ignorance,” which many companies have turned to in the past, has run out of road. “The model learned something we didn’t program,” “We didn’t know what the model was doing,” and a general inability to explain why a model produced a particular output can no longer be used to deflect or dilute blame. The idea of epistemic responsibility, the duty to know what one reasonably could and should have known, is gaining ground.
Donald Rumsfeld’s 2002 “known knowns / known unknowns / unknown unknowns” riff was pilloried at the time, yet its core distinction tracks with how philosophers parse ignorance. Torsten Wilholt separates conscious ignorance (what we know we don’t know) from opaque ignorance (the space of what we can’t yet imagine). Conscious ignorance is finite (though it can grow) and, importantly, progress-generating: we can map blind spots, design tests, and invest in learning. Opaque ignorance will always exist, but it doesn’t excuse failing to address what we already know we don’t know.
And with AI, we know a great deal about what we don’t know, and about the harms that can flow from AI models. We know black-box models can’t reliably explain specific decisions. We know AI systems struggle when the world looks different from their training data (known as “distribution shift”). We know training data encodes historical bias. We know chatbots confidently fabricate. We know automation changes user behavior in ways that create new risks. If these “known unknowns” are on the table, then the duty to anticipate, instrument, monitor, and respond is, too.
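To make one of these known unknowns concrete, here is a minimal sketch, in Python, of what instrumenting for distribution shift might look like: it compares a single feature’s live values against a reference sample saved at training time using a two-sample Kolmogorov–Smirnov test. The feature, threshold, and simulated data are illustrative assumptions, not a prescription.

```python
# A minimal sketch of a drift check, assuming one numeric input feature
# and an offline reference sample saved from training time.
# The threshold and data below are illustrative, not recommendations.
import numpy as np
from scipy.stats import ks_2samp


def check_feature_drift(reference: np.ndarray, live: np.ndarray,
                        alpha: float = 0.01) -> dict:
    """Compare live values of one feature against the training-time sample."""
    result = ks_2samp(reference, live)
    return {
        "ks_statistic": float(result.statistic),
        "p_value": float(result.pvalue),
        # A small p-value suggests the live distribution has shifted.
        "drifted": bool(result.pvalue < alpha),
    }


if __name__ == "__main__":
    rng = np.random.default_rng(seed=0)
    training_sample = rng.normal(loc=0.0, scale=1.0, size=5_000)
    # Simulate production traffic that has drifted away from training data.
    production_sample = rng.normal(loc=0.7, scale=1.3, size=5_000)
    print(check_feature_drift(training_sample, production_sample))
```

In practice a check like this would run on a schedule across many features, with alerts feeding the monitoring and escalation processes discussed below.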
Daryl Koehn, a philosopher and business ethicist, distinguishes between foreseeable and unforeseeable consequences in the duty of care. Businesses are clearly responsible for the consequences a reasonable person could anticipate—especially where known classes of harm (discrimination, fabrication, over-reliance, privacy leakage) are widely documented. For genuinely unforeseeable outcomes, moral judgment should be tempered—but responsibility does not vanish. It shifts into duties of response: rapid remediation, transparent communication, fair compensation, and structural fixes to reduce recurrence. Many of the public blowups that erode trust do so not only because harm occurred, but because the organizational response signaled denial, deflection, or indifference.
In the cases cited above, the businesses in question were all, in one way or another, ultimately held to account. Air Canada was found liable for negligent misrepresentation. SafeRent paid a significant settlement and committed to change its practices. A judge rejected Character.AI’s free speech defense.
Courts and lawmakers will continue to adapt existing legal frameworks, often ill-equipped for the complexities of modern AI, to address novel cases. And even where regulation lags, litigation and consumer backlash will continue to fill the void. Consumers don’t differentiate between corporate divisions or software vendors. They see the brand. Responsibility, even if outsourced, still lands at your door.
Sociologist Robert Merton’s classic work on the “law of unintended consequences” gives businesses a useful starting point for planning risk and response. He identified five common reasons we’re often caught off guard: lack of knowledge, mistakes and errors, short-term thinking, cultural values that limit our perspective, and predictions that end up changing the behavior they’re meant to anticipate. All of these are visible in modern AI practice. Teams rushing to meet deadlines often skip thorough testing. Cultural assumptions quietly shape how data is labeled or how prompts are written. And when certain risks are made public, they can unintentionally influence how bad actors behave. These factors can’t be eliminated entirely; they’re part of how people and systems operate. But they can be acknowledged, tracked, and mitigated through more intentional design and oversight.
What does this kind of responsible AI practice look like in real life? It starts with AI literacy at the leadership and product-owner level: not to turn everyone into experts, but so they can ask the right questions. Where does our training data come from? Who might it leave out? How do we know this system will work in the real world? Who’s responsible if it doesn’t? Teams should build a culture of asking “what if” before launch: What could go wrong? Who would be affected first? What early warning signs should we look for? When do we pull the plug if things go sideways?

Testing should go beyond average performance to include edge cases, bias, resilience to unexpected inputs, and how the system behaves in high-risk scenarios. There should be backup plans and human oversight built in when the stakes are high. And just as importantly, launch isn’t the finish line; it’s the start of risk management. AI systems should be monitored closely after deployment, with proper resources for spotting problems, responding to incidents, and learning from what goes wrong, just like any other critical system.
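As one small illustration of treating launch as the start of risk management, here is a minimal sketch of a post-deployment triage rule. The health signals, names, and thresholds are hypothetical, chosen only for illustration; a real system would define its own metrics, escalation paths, and owners.

```python
# A minimal sketch, with hypothetical signals and thresholds, of
# post-deployment oversight: roll up a few health signals for an AI
# system and decide whether to keep serving, route outputs to human
# review, or pause the system entirely.
from dataclasses import dataclass


@dataclass
class HealthSnapshot:
    complaint_rate: float    # user-reported problems per 1,000 requests
    fabrication_rate: float  # share of sampled outputs that fail fact checks
    drift_flagged: bool      # e.g. the result of a drift check like the one above


def triage(snapshot: HealthSnapshot) -> str:
    """Return one of 'serve', 'human_review', or 'pause'."""
    # "Pull the plug" conditions: clear signs the system is causing harm.
    if snapshot.fabrication_rate > 0.05 or snapshot.complaint_rate > 10.0:
        return "pause"
    # Early-warning conditions: keep serving, but escalate to people.
    if snapshot.drift_flagged or snapshot.fabrication_rate > 0.01:
        return "human_review"
    return "serve"


if __name__ == "__main__":
    snapshot = HealthSnapshot(complaint_rate=2.0,
                              fabrication_rate=0.02,
                              drift_flagged=False)
    print(triage(snapshot))  # -> human_review
```

The point is not the specific rules but that the thresholds, the escalation target, and the decision to pause are written down, owned, and revisited, rather than improvised after something goes wrong.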
Federal regulation may have lost momentum, but that doesn’t absolve companies of responsibility. In fact, the burden now falls even more clearly on businesses themselves, not only to reduce harm but to protect their own bottom line. The legal, financial, and reputational costs of neglecting AI risk are rising, even in the absence of formal rules. Companies that proactively build in AI oversight, transparency, and ethical safeguards won’t just stay ahead of regulation; they’ll earn consumer trust and long-term resilience.