
A senior identity engineer who builds authentication infrastructure for one of the world’s most consequential identity platforms reviewed the MINDCODE 2026 submissions and concluded that the field is rushing to build clinical features on top of an identity layer that has not been designed to carry the weight those features will eventually place on it.
There is a layer of every consumer software product that the user almost never thinks about and that almost never appears in the marketing material. The layer answers the questions of who the user is, how the system knows it is them, what happens when the user cannot prove they are them, who else is allowed to act on the user’s behalf, and what happens to all of those answers when the relationship between the user and the product changes — when the user dies, when the user becomes incapacitated, when the user loses their phone, when the user changes their email address, when the user is in a moment in which the act of authenticating is itself the obstacle between them and the help they need. The layer is identity. The discipline that builds it is identity and access management. And the discipline has spent the last fifteen years working out, often the hard way, that the cost of getting these questions wrong is borne by the user at the moment they can least afford it.
Nandagopal Seshagiri has spent his career inside that discipline. As a Senior Software Architect at Okta, his practice has centered on building secure, scalable identity solutions for an authentication platform that handles billions of sign-in events for tens of thousands of customer organizations. His work spans the design of authentication systems, the architecture of session management, and the operational reality of identity federation at scale. When Hackathon Raptors invited him to serve as a judge for MINDCODE 2026 — an international 72-hour hackathon focused on software for human health — he found a category of system that had inherited the consumer-software identity playbook by default, without anyone on the building teams having paused to ask whether that playbook was the right one for the domain.
“Mental health software is being built on top of the same identity primitives that ride-share apps and food-delivery apps use,” Seshagiri observes. “Sign in with Google. Sign in with Apple. Email and password with optional second factor. Session that lasts until the user signs out or the device forgets them. None of these primitives are wrong in themselves. The problem is that they were designed for relationships in which the worst-case failure mode is that the user has to re-enter a credit card. In mental health software, the worst-case failure mode is that the user cannot reach their care record in the moment they most need to, or that the wrong person can. The identity layer has to be designed for those failure modes, not retrofitted to handle them after the product is already in users’ hands.”
The Account Recovery Problem in a Care Context
A pattern Seshagiri saw repeatedly across the MINDCODE submissions was the casual treatment of account recovery — the set of flows that determines what happens when a user cannot prove they are themselves to the system that holds their data. In most consumer software, account recovery is a moderately annoying inconvenience: the user clicks “forgot password,” receives an email, sets a new credential, and resumes their relationship with the product. The friction is real but bounded. The user is locked out for minutes or hours, not days, and the loss of access is reversible.
“In a mental health context, that friction model breaks in a way that the consumer software industry has not yet had to confront,” Seshagiri notes. “If the account recovery flow requires email verification, and the user cannot access their email because they are in a hospital, in a shelter, in a controlled environment, or in any of the other circumstances under which users of mental health software actually find themselves — the system has effectively locked them out of their own care record at the moment when continuity of care matters most. That is a design failure with clinical consequences, not a UX inconvenience. It needs a different recovery architecture than the one consumer software has standardized on.”
His recommendation in this space was concrete and rooted in identity engineering practice. Build the recovery flow first, before building the credential flow. Decide, before the user creates an account, what the system will do when the user cannot prove they are themselves but can prove some other thing about themselves — when they can answer a question only they would know, when they can produce a recovery code generated at enrollment, when they can be vouched for by a designated trusted contact, when they can be re-onboarded from a partial identity proof rather than a full one. The recovery architecture is the foundation on which the rest of the identity layer rests, and in mental health software it is the foundation that is most consistently absent from the submissions Seshagiri reviewed.
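One of the recovery paths mentioned above — a recovery code generated at enrollment — can be sketched simply. The following is a minimal, hypothetical illustration (the wordlist, code format, and count are assumptions, not anything from a specific submission): the plaintext codes are shown to the user exactly once, only salted hashes are persisted, and each code is burned on first use.

```python
import hashlib
import secrets

# Assumed, illustrative wordlist; a real one would be much larger.
WORDLIST = ["ocean", "maple", "ember", "stone", "cedar", "raven", "tulip", "birch"]

def generate_recovery_codes(n=8):
    """Generate one-time recovery codes at enrollment.

    Returns (plaintext_codes, stored_hashes). The plaintext is displayed
    to the user once; only salted hashes are stored, so a database leak
    does not expose usable codes.
    """
    codes, hashes = [], []
    for _ in range(n):
        # Human-transcribable: words plus a short numeric tail, so a user
        # can read a code aloud over a phone or at a clinic front desk.
        words = "-".join(secrets.choice(WORDLIST) for _ in range(4))
        code = f"{words}-{secrets.randbelow(10000):04d}"
        salt = secrets.token_hex(8)
        digest = hashlib.sha256((salt + code).encode()).hexdigest()
        codes.append(code)
        hashes.append((salt, digest))
    return codes, hashes

def redeem(code, stored):
    """Check a presented code against stored hashes; burn it on success."""
    for i, (salt, digest) in enumerate(stored):
        candidate = hashlib.sha256((salt + code).encode()).hexdigest()
        if secrets.compare_digest(candidate, digest):
            del stored[i]  # one-time use: a redeemed code never works again
            return True
    return False
```

The design point is the ordering Seshagiri describes: this enrollment-time artifact exists precisely so the recovery flow does not depend on the user's email or phone being reachable.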
“The strongest projects in my batch had thought about what their product would look like to a user who had lost their phone and was sitting in a clinic waiting for help,” Seshagiri observes. “The weakest had built a flow that assumed the user always had their phone, always had their email, always had access to whatever second factor they had enrolled at signup, and would never be in a moment in which the friction of identity proofing was the obstacle between them and the help the product was supposed to provide. That assumption is wrong for this domain in a way it is right for almost any other consumer category.”
Identity Federation and the Relying-Party Question
A second pattern that drew Seshagiri’s attention was the way several submissions handled identity federation — the practice of letting the user sign in with an existing identity provider like Google or Apple instead of creating a new credential specific to the app. Federation is, in most consumer contexts, a straightforward usability win. The user does not have to remember a new password. The team does not have to manage credential storage and reset flows. The identity provider handles authentication; the application handles authorization. The architecture is well-understood and broadly considered a best practice.
“Federation is the right answer for most consumer products,” Seshagiri observes. “Mental health software is one of the categories where it is a much more delicate decision than the team is usually aware of when they make it. The question is not whether Google or Apple can authenticate the user reliably. They can. The question is what the relying-party relationship implies for the clinical record the user is about to entrust to the application. When the user signs in with Google, the application is now in a federation relationship with a third party who has visibility into the existence of the relationship — the fact that this user has an account with this mental health product. Depending on the user’s threat model, that visibility may itself be a privacy concern that the user did not knowingly consent to.”
His observation was structural rather than alarmist. The point was not that identity federation is wrong for mental health software. The point was that it is a decision that deserves a different level of deliberation than the team usually gives it. Some users — perhaps most — will be best served by the convenience of federation and the operational reliability of a managed identity provider. Other users — particularly those whose relationships with mental health systems are themselves sensitive, contested, or dangerous — will be best served by an option to create a credential that does not connect their care record to any other identity they hold elsewhere on the internet. The architecture should support both, and the user should be told, in language they can understand, what each choice implies for the ledger of who knows they are using the product.
“The teams that did this well offered the user a clear choice at signup,” Seshagiri notes. “The teams that did it poorly defaulted to federation without explaining what the federation implied, and gave the user no path to a non-federated identity later. The first pattern respects the user’s threat model. The second pattern assumes the user has the same threat model as the team designing the product, which in this domain is almost never true.”
Session Management for Users in Distress
A subject Seshagiri returned to throughout his reviews was session management — the policies that determine when a user is considered actively signed in, when they are signed out automatically, what happens to the session when the device is left unattended, and how the application balances the convenience of staying signed in against the security of an idle timeout. Session policy is one of the most operationally consequential parts of an identity system, and in most consumer software it is tuned for convenience — long sessions, infrequent re-authentication, minimal interruption of the user’s workflow.
“Mental health software needs to think about session policy as a clinical safety question, not as a user-friction question,” Seshagiri argues. “If the user is on a shared device — a public library computer, a phone borrowed from a family member, a hospital tablet — the session policy determines whether the next person to use the device sees the user’s previous mental health interaction. If the user is in a moment of acute distress and walks away from the device without logging out, the session policy determines whether anyone who picks up the device next sees the conversation the user just had. The default of long, persistent sessions is dangerous in this domain in a way it is not dangerous in most other consumer domains.”
His recommendation here was to approach session policy as a per-domain design decision rather than a default to inherit. Mental health applications should have shorter session timeouts than commerce applications. They should have explicit logout flows that the user is gently nudged toward at the end of an interaction. They should treat session resumption with skepticism, requiring lightweight reauthentication when a session has been idle for a meaningful period. None of these patterns are technically difficult. All of them are absent from the consumer-default session model that most teams inherit when they bolt an identity provider onto their application without thinking about what the inherited defaults imply for the population they are trying to serve.
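The policy described above — short timeouts, skepticism about session resumption, lightweight reauthentication after meaningful idle time — reduces to a small state check on each request. A sketch, with the two thresholds as assumed placeholder values a real team would tune for its population:

```python
import time
from dataclasses import dataclass

IDLE_TIMEOUT = 10 * 60   # assumption: 10 idle minutes ends the session outright
STEP_UP_AFTER = 3 * 60   # assumption: after 3 idle minutes, ask for a light re-proof

@dataclass
class Session:
    user_id: str
    last_seen: float  # epoch seconds of the last authenticated request

def check_session(session, now=None):
    """Classify a session as active, needing lightweight reauth, or expired."""
    now = now if now is not None else time.time()
    idle = now - session.last_seen
    if idle >= IDLE_TIMEOUT:
        return "expired"        # next request routes to a full sign-in
    if idle >= STEP_UP_AFTER:
        return "step_up"        # e.g. re-enter a PIN before showing the record
    session.last_seen = now     # activity keeps an active session alive
    return "active"
```

Note that the `step_up` branch deliberately does not refresh `last_seen`: merely being looked at does not re-earn the session's trust; only a successful re-proof should.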
“The session is a trust boundary,” Seshagiri observes. “In most consumer software, the trust boundary can be relaxed because the cost of getting it wrong is bounded. In mental health software, the trust boundary is operating in an environment where the user may be the most vulnerable they have ever been at the moment the session is being maintained. The session model has to acknowledge that, or the product is making a security promise it cannot keep.”
The End-of-Relationship Problem
A theme that ran through Seshagiri’s reviews — and that he flagged as the most consistently underdeveloped part of the identity architecture in MINDCODE submissions — was what he described as the end-of-relationship problem. Every consumer product eventually ends its relationship with every user. The user closes the account, the user dies, the user becomes incapacitated, the user transitions out of the life circumstance that brought them to the product, the user’s care needs change, the user’s identity is contested by a family member or a clinical guardian. In most consumer categories, the end of the relationship is operationally simple: the data is deleted, the account is closed, the relationship terminates. In mental health software, the end of the relationship is a question with no default right answer.
“What happens to a user’s care record when the user dies?” Seshagiri asks. “What happens when a family member presents a death certificate and asks for access? What happens when a court orders the data preserved as part of an inquest? What happens when a clinician needs the record to inform the care of a surviving relative? What happens when the user transitions from voluntary care to involuntary care and the identity that was created under the first circumstance is no longer the identity the user can reliably control? None of these questions had answers in the submissions I reviewed. All of them will eventually have to.”
His recommendation was that mental health software teams write down their end-of-relationship policy before they write their privacy policy. The end-of-relationship policy is what determines whether the privacy policy is enforceable in the situations that actually matter. Without it, the privacy policy is an aspirational document that will be tested by the first user who dies, the first user who becomes incapacitated, the first user who is in a divorce proceeding in which the mental health record becomes evidence, and the first user whose care provider needs to access the record to inform a clinical decision the application did not anticipate.
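Writing the end-of-relationship policy down before launch can be as literal as encoding it as explicit configuration that the identity layer consults, with a hard failure for any event the team has not yet decided. The events and actions below are purely illustrative assumptions, not recommendations for any specific jurisdiction:

```python
from enum import Enum

class EndEvent(Enum):
    USER_CLOSES_ACCOUNT = "user_closes_account"
    USER_DECEASED = "user_deceased"
    LEGAL_HOLD = "legal_hold"
    GUARDIANSHIP_CHANGE = "guardianship_change"

# Illustrative policy table: each terminal event maps to an action
# decided before launch, so no answer has to be improvised later.
END_OF_RELATIONSHIP_POLICY = {
    EndEvent.USER_CLOSES_ACCOUNT: {"record": "delete_after_grace_period",
                                   "grace_days": 30},
    EndEvent.USER_DECEASED:       {"record": "seal",  # readable only via legal process
                                   "notify": "designated_contact_if_enrolled"},
    EndEvent.LEGAL_HOLD:          {"record": "preserve_immutable",
                                   "release": "on_court_order"},
    EndEvent.GUARDIANSHIP_CHANGE: {"record": "freeze",
                                   "release": "after_identity_reproofing"},
}

def resolve(event):
    """Return the written-down answer; refuse to improvise for unknown events."""
    if event not in END_OF_RELATIONSHIP_POLICY:
        raise KeyError(f"no written policy for {event}; escalate, do not improvise")
    return END_OF_RELATIONSHIP_POLICY[event]
```

The table itself matters less than the `resolve` failure mode: an event with no entry surfaces as an explicit gap for humans to close, rather than as an operator's judgment call made under pressure.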
“The identity layer is the part of the system that has to know the answer to these questions,” Seshagiri reflects. “If it does not, the answer will be improvised, and improvised answers in this domain are how trust is permanently lost. The teams that scored highest with me had at least begun to think about these questions, even if they had not yet built the architecture to answer them. The teams that scored lowest had not begun to think about them at all, and were building features whose long-term operational reality was being deferred to a version of the team that did not yet exist.”
What the Strongest Submissions Demonstrated
The submissions that Seshagiri rated highest shared a quality his identity engineering background made impossible to ignore. They had treated the identity layer as a load-bearing part of the product rather than as a checkbox to satisfy on the way to the clinically interesting features. They had thought about account recovery as a clinical safety problem. They had treated identity federation as a user choice rather than as a default. They had tuned session policy for the population the product was actually designed to serve. And they had at least begun to confront the questions about what happens when the relationship between the user and the product comes to an end in any of the many ways such relationships actually end.
“The teams that produced submissions I would feel comfortable seeing in production,” Seshagiri notes, “were the teams whose identity architecture was built for the failure modes that mental health software actually has to handle, not for the failure modes that consumer software has standardized around. The teams that produced submissions I would not feel comfortable seeing in production had built thoughtful clinical features on top of an identity layer that would, in production, fail the user in exactly the moment the clinical features were supposed to help. The first group was building infrastructure for care. The second group was building clinical features without infrastructure to support them.”
His closing observation was deliberately practical. The identity engineering disciplines that the broader software industry has developed over the last decade are not secrets, and they are not difficult to apply when the team designing the product takes the trouble to apply them deliberately. The reason they are absent from most mental health software is not that they are technically out of reach. The reason is that the teams building mental health software are mostly inheriting consumer-software identity defaults without pausing to ask whether those defaults serve the population they are trying to reach. The cost of that inheritance is being deferred to a version of the product that does not yet exist, and to users whose moments of greatest need will be the moments at which the deferral is paid back in full.
“The identity layer is the part of the product that promises the user the system will know who they are when it matters,” Seshagiri reflects. “In this domain, that promise is more consequential than almost any other promise the product makes. The teams that took it seriously produced systems I respected. The teams that did not produced systems I would not yet trust with a user’s care record. That gap is the variable I want this field to start closing, because it will close itself the hard way if the field does not close it deliberately.”
MINDCODE 2026 — Software for Human Health was an international 72-hour hackathon organized by Hackathon Raptors from February 27 to March 2, 2026, with the official evaluation period running March 3–14. The competition attracted over 200 registrants and resulted in 21 valid submissions across the mental health and wellness domain. Submissions were independently reviewed by a panel of judges across three evaluation batches. Projects were assessed against five weighted criteria: Impact & Vision (35%), Execution (25%), Innovation (20%), User Experience (15%), and Presentation (5%). Hackathon Raptors is a United Kingdom Community Interest Company (CIC No. 15557917) that curates technically rigorous international hackathons and engineering initiatives focused on meaningful innovation in software systems.

