What does "Principled" AI mean to us?
The rapid acceleration of AI technology has the potential to shift the way society engages with its daily work. This moment is high variance; who thrives and who struggles hangs very much in the balance. At Thorian, we are dedicated to building AI applications that are net positive for society. For us, this means: 1) enhancing democracy and human rights; 2) strengthening the institutions tasked with promoting peace, prosperity, and stability for all; 3) choosing reliability and functionality over flash and marketing; and 4) working with customers who reflect our values.
As we navigate this moment, here are some observations about AI and society, drawn from more than five years of working in the space, that inform how we engage with our customers and society at large.
AI, institutions, and public trust
- AI is morally neutral. Like any tool, it can be deployed for both good and evil. We need to deploy it for good.
- There is a growing gap between the public and our institutions, one that threatens to further erode trust in government. When our institutions don't reflect the way the world actually works, when people's interactions with their government are the least efficient thing they do, they grow to despise the things that hold society together. Government efficiency through technology is a public trust and stability issue.
- Privacy concerns are often overstated relative to how well people actually understand privacy and security. Security language is frequently used imprecisely, and sometimes defensively, to protect old systems and habits rather than users, even when those systems are less secure than the alternatives. The discussion is often driven less by technical reality than by institutional fear.
- “AI governance” without a deployment path is often just organized hesitation.
Organizational change, adoption, and inertia
- Most organizations are not blocked by model capability; they’re blocked by poor data practices, fearful policies, a lack of understanding of what AI can do, and a creative process dampened by inertia.
- There is a gap between what people fear AI will do to them and what they're able to make AI do. Our job is to increase their confidence while giving them power to control AI on their terms.
- Lots of “solved” problems are not actually solved; they have just been normalized, deferred, or wrapped in better language. Organizations get used to inefficiency when it is familiar enough. Existing pain often survives because it is socially legible, not because it is acceptable.
- Most organizations do not have a technology problem first; they have a clarity problem, an ownership problem, and a process problem. AI exposes these weaknesses very quickly because it depends on defined inputs, outputs, exceptions, and decision paths. Without that structure, even strong tools degrade fast.
- Organizations will tolerate inefficiency if it is familiar and politically viable. It’s our job to create wins for our customers within a higher-tech future.
Data foundations, readiness, and consistency
- There are two parts to every AI implementation: data processing/context building and agentic operations. We can't have the outcomes-based piece without the data-treatment piece (a minimal sketch of this split follows this list).
- Every organization has a “data guy” in IT or on a data team, and that person is the bottleneck. We need to give superpowers to people in these positions. IT and data teams are the fulcrum because every impossible request already lands on their desk.
- The real world is mostly made of patched-together systems, brittle processes, and institutional workarounds. “AI readiness” is one of the most overused buzzwords and one of the least meaningfully acted upon. Most teams are not standing on clean, structured, interoperable foundations. The good news is that this is fixable.
- Difficulty still scales non-linearly with data size, especially once the data is inconsistent, duplicated, or poorly structured. Small examples create false confidence. Repeated, consistent results at real scale are much harder than one-off wins.
- Most organizations are full of invisible translation labor between people, systems, and formats. That labor is expensive, but it is distributed thinly enough that nobody names it clearly. A lot of value is hidden in removing coordination, reconciliation, and repeated interpretation.
- The cost of inconsistency is underestimated because it is spread across too many people and too many days. One wrong label, one missing field, one different format does not look catastrophic in isolation, but together they create drag, delays, and distrust. Organizations often experience this as “things taking forever” without understanding the underlying cause.
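A minimal sketch of that two-part split, using hypothetical names (`build_context`, `run_agent`) and a toy record format; the point is the order of operations, not any particular implementation:

```python
from dataclasses import dataclass

@dataclass
class Context:
    """Cleaned, deduplicated records plus the metadata an agent needs."""
    records: list[dict]
    schema: dict[str, str]

def build_context(raw: list[dict]) -> Context:
    """Data-treatment stage: normalize keys and values, drop duplicates.
    Everything the agentic stage does is bounded by the quality of this step."""
    seen: set = set()
    cleaned: list[dict] = []
    for row in raw:
        normalized = {k.strip().lower(): str(v).strip() for k, v in row.items()}
        key = tuple(sorted(normalized.items()))
        if key not in seen:
            seen.add(key)
            cleaned.append(normalized)
    schema = {k: "str" for k in cleaned[0]} if cleaned else {}
    return Context(records=cleaned, schema=schema)

def run_agent(context: Context, goal: str) -> str:
    """Agentic stage: operate on treated data toward an outcome.
    (Stubbed here; in practice this is where model calls would happen.)"""
    return f"{goal}: {len(context.records)} clean records available"

raw = [
    {" Name ": "Ada", "Dept": "IT"},
    {"name": "Ada", "dept": "IT"},  # a duplicate once normalized
    {"Name": "Grace", "Dept": "Data"},
]
print(run_agent(build_context(raw), "summarize headcount"))
# -> summarize headcount: 2 clean records available
```

The agentic stage never touches raw records directly; it inherits whatever quality the treatment stage produced, which is why the outcomes piece cannot exist without the data piece.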
Product design, UX, and user control
- The request to a data team should be “deliver the output,” not “send me a data pull.” The world is moving toward lighter interfaces and more outcomes-based work with AI. UIs should contain the minimum number of controls and inputs possible. Every additional field, option, and decision transfers cognitive load back to the user and weakens momentum. Good interfaces do not just reduce clicks; they reduce doubt.
- Control is part of the product: permissions, approvals, audit trails, and secure deployment are how trust gets built (see the sketch after this list). Our customers need to feel in control of AI. They need to feel secure that a mistake won't cost them their jobs.
- Software platforms sometimes add more complexity than they subtract. The difference is often hidden inside interfaces, integrations, and maintenance. A lot of products present themselves as simplification while actually relocating the burden elsewhere. The user still pays for that complexity eventually.
- User independence ends at the first configuration decision they cannot make confidently. A product is not truly usable if it works only when the most technical person is present to interpret, repair, or authorize it. Reliance on internal specialists is often just disguised product failure.
- Chatbots have become the default expression of AI, but better productivity gains are often found in function-bots that do work silently. Conversation is not always the best interface. In many cases, chat is just a transitional layer before the real value moves into background execution.
- A lot of software creates more surfaces to manage instead of less work to do. Dashboards, admin layers, and monitoring views often proliferate where action should have happened. If the system still depends on a human repeatedly translating intent into clicks, the work is not really finished.
- People say they want automation, but often what they actually want is relief without loss of control. Trust is built less by raw intelligence than by predictability, recoverability, and clear boundaries around what the system is allowed to do. The most important product behavior often appears at the point of uncertainty or failure, not success.
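What “control as part of the product” can look like in practice, as a minimal sketch with invented names (`audit`, `run_with_approval`) rather than a real deployment pattern: a consequential action is gated behind explicit approval, and both outcomes leave an audit record:

```python
import json
from datetime import datetime, timezone

AUDIT_LOG = "audit.jsonl"  # hypothetical append-only audit trail

def audit(event: str, actor: str, detail: dict) -> None:
    """Append every decision, approved or not, to the audit trail."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "event": event,
        "actor": actor,
        "detail": detail,
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")

def run_with_approval(action: str, actor: str, approved_by: str | None) -> bool:
    """Gate a consequential action behind an explicit human approval.
    A clear boundary on what the system may do, with a record either way."""
    if approved_by is None:
        audit("action_blocked", actor, {"action": action, "reason": "no approval"})
        return False
    audit("action_executed", actor, {"action": action, "approved_by": approved_by})
    # ... perform the actual work here ...
    return True

run_with_approval("bulk-update records", actor="agent-7", approved_by=None)
run_with_approval("bulk-update records", actor="agent-7", approved_by="j.smith")
```

The design choice worth noticing is that the refusal path is logged as carefully as the success path; predictability and recoverability live in that record, not in the model.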
Operational reliability, workflow completion, and leverage
- Repeated, consistent results are much more difficult than one-offs, and the market routinely underestimates that difference. A demo can prove that something is possible, but it says very little about whether it can survive real operational conditions. Reliability is usually a harder product than intelligence, but it is the one worth aiming for.
- A system is not mature just because it produces answers. The harder problem is producing answers that can survive contact with real workflows, exceptions, deadlines, and accountability structures. Accuracy in practice includes timing, consistency, context, and handoff quality.
- A product becomes real when it reduces coordination, not just when it produces an impressive result. Saving time as a goal is often too abstract on its own; what people need to feel immediately is fewer dependencies, fewer follow-ups, and less waiting on other people. Relief is often more persuasive than capability.
- A lot of institutional knowledge is just operational fragility with a flattering name. When critical work lives in one person’s habits, inbox, or undocumented judgment, the organization is not resilient; it is exposed. The system is only as real as its ability to function without heroics.
- We are not interested in making AI feel magical. We are interested in making difficult work feel governable, repeatable, and less dependent on memory, urgency, or individual heroics. The point is not spectacle; the point is usable leverage.
- Our advantage is not that every problem is unique; it is that many custom-looking problems collapse into recurring structures. We are interested in identifying the underlying operation type and reconfiguring it to fit the surface shape. That is where reusability and real product leverage begin.