The 2026 viewpoint has been based on whether AI will be "assisting" humans or replacing their abilities, as those specific functions may have been automated.
Before AI existed and had an impact on white-collar jobs, automation had affected non-white-collar jobs such as manufacturing, home services like grinding with a mixer-grinder, and human-aided services in human resources.
However, the results have been mixed for healthcare applications, with common uses: Automating tasks for clinicians to easily manage patients' records, regulatory paperwork, and more. The crux of the matter is that the critical assessment, treatment, and decision-making that physicians do today may be managed entirely "autonomously" by a digital physician AI clone who has learned enough in that area, especially without human intervention. AI or Gen AI is not a magic box but a technology that can run haywire or rogue without essential governance and policies. As an analogy, let us look at traffic lights. We follow them anywhere in the world - red, blue, and green help us avoid accidents in the majority of our cases. Without this standard and "discipline", humans would need to understand each country's specific traffic codes during travel to keep themselves or citizens of the country they visit safe. The ability to work within that framework is important, and technology needs discipline and boundaries.
AI's ability to generate, reason, or work faster is also marred by its equal inabilities, like hallucinating, biases, memory, and recommendation poisoning with biased advice, and this can be part of any closed or open LLM, whether built with a smaller budget or spending multi-million-dollar budgets on improving their engines. Each LLM vendor has a small print like your insurance quotation on "AI making mistakes," and hence any claims to AGI-like systems will require strong policy support, supervision, and patterns where LLM is the judge to have strong domain-specific knowledge or reason to reject them.
In the healthcare world, accountability is a big factor for patients, with confidentiality using a patient-clinician clause during the treatment process, so even if the clinician gets assistance from his/her seniors, advisors, or uses an AI assistant. I don't think a patient will be interested in knowing that a mortality was a result of an AI action.
While building Ask Octo, here are some of the core principles, with the core problem and audience driving the solution, with technology as an enabler
1) We have always used the Architecture Best Practices approaches and have always used a value-based reasoning approach to building things, while we have closed models by proprietary tech giants, which have been built for larger mass consumption with limited fine-tuning and capabilities, there are more smaller models, possibly open sources by the larger tech's own API suites like Gemma, Llama to create smaller footprints with less GPU usage and also supports agentic or non-agentic architecture with a framework to mitigate hallucinations, leaks, biases, poisoning or unsafe uses than intended, and prompt injection.
2) To use an Explainable AI is to provide clear, human-friendly explanations for how AI models make decisions, predictions, or classifications in domains like healthcare, finance, and legal systems, where stakeholders need to trust, verify, and sometimes challenge the decisions made by algorithms. Transparency, which builds trust, supports accountability, and enables better decision-making with fine-tuning model training(weights/bases) into a "deploy anywhere" model based on maturity and democratizing it to an on-premises model that can be deployed to edge/cloud based on domain/research needs.
3) Hardening of the infrastructure, tools, service, and application, audit layers based not only on generic but also on specific medical guardrails and red teaming for any external data sources using RAG or other. For medical content, context is crucial, and even if a source may be trusted or valid, the context may be entirely wrong for that condition and requires a stringent human-in-the-loop (HITL) review.
4) AI solution development for the entire development cycle, from ideation to release or iteration, requires adequate feasibility analysis, both using AI and human skills
Disclaimer- This article has been written by a human and may have errors/omissions. The image has been generated with the help of AI