Nika

How much do you trust AI agents?

With the advent of clawdbots, it's as if we've all lost our inhibitions and "put our lives completely in their hands."

I'm all for delegating work, but not for handing over too much personal or sensitive stuff.

I certainly wouldn't trust an agent to the extent of giving it:

  • access to personal finances and operations (maybe just setting aside an amount I'm willing to lose)

  • sensitive health and biometric information (can be easily misused)

  • confidential communication with key people (a secret is a secret)

Are there any tasks you wouldn't give AI agents, or data you wouldn't allow them to access? What would those be?

Re. finances – yesterday I read this news: Sapiom raises $15M to help AI agents buy their own tech tools – so this may mark a new era in which funding goes to agents rather than to founders.


Replies

Vivian Zheng

I couldn’t agree more. AI agents are great for boosting productivity, but I’d never let them handle my finances, sensitive health data, or confidential conversations—trust needs clear boundaries.

Nika

@vivianzheng There is nobody to rely on :D

Priyanka Gosai

AI agents are very strong at analysis-heavy work like forecasting, scenario modeling, and competitive analysis because they can process large datasets faster than humans. The boundary for me is not insight generation but autonomous execution. I am comfortable letting agents crunch data and propose decisions, but a human should own the final call when accountability or second-order effects are involved.

Nika

@priyanka_gosai1 Work-related stuff like filtering and analysing – okay, but nothing much else, right? :)

Priyanka Gosai

@busmark_w_nika Mostly work-related stuff. Sometimes also personal stuff, like expenses.

Chris

Building on what others have mentioned, one area I’m still cautious about is confidential human communication. Conversations involving private intent, negotiation, or emotional context—like founder discussions, legal strategy, or sensitive relationship dynamics—feel different from other tasks we delegate. These exchanges aren’t just about transferring information; they’re shaped by timing, framing, subtext, and mutual trust.

On sensitive health and biometric data, I tend to share the hesitation others have expressed. This kind of data is deeply personal and hard to separate from identity itself. Even if current systems are secure and well-intentioned, the long-term risks feel harder to reason about—future re-identification, secondary use, or new forms of inference as models and datasets evolve. Unlike credentials or accounts, this isn’t something you can easily change or revoke once it’s exposed.

So for me, it’s less about never trusting AI in these areas and more about being aware that the downside risks could be asymmetric.

Valeriia Kuna

Definitely agree on personal finances and biometric data.

But I also draw a hard line at social media autonomy. I would never give an agent write-access to my LinkedIn or X accounts to post or reply automatically. My online presence is my reputation.

Akshat Sharma

I wouldn’t grant agents access to irreversible domains: personal finances beyond capped loss, raw biometric data, or confidential human relationships. These aren’t automation problems; they’re alignment and accountability problems.
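The "capped loss" idea mentioned here and in the original post can be sketched in a few lines: give the agent a hard spending budget and refuse any transaction that would exceed it before it executes. Everything below (`CappedWallet`, `authorize`) is a hypothetical illustration, not the API of any real agent framework or payment provider:

```python
# Minimal sketch of a capped-loss guard for agent spending.
# All names here are hypothetical, for illustration only.

class BudgetExceeded(Exception):
    """Raised when an agent action would exceed its allotted budget."""

class CappedWallet:
    def __init__(self, cap: float):
        self.cap = cap      # the amount you are willing to lose
        self.spent = 0.0

    def authorize(self, amount: float) -> None:
        # Check-then-commit: refuse the charge before any money moves,
        # so the cap can never be overshot.
        if self.spent + amount > self.cap:
            raise BudgetExceeded(
                f"refused: {amount:.2f} would exceed cap {self.cap:.2f}"
            )
        self.spent += amount

wallet = CappedWallet(cap=50.0)
wallet.authorize(30.0)       # fine: 30 <= 50
try:
    wallet.authorize(25.0)   # would total 55 > 50
except BudgetExceeded as e:
    print(e)
```

The point is that the cap is enforced outside the agent's reasoning loop: even a misaligned or confused agent can only lose what you set aside.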

The Sapiom news is interesting, but also telling: we’re starting to fund agents as economic actors before we’ve solved auditability, intent drift, or liability. Capital flows faster than safety guarantees.

Alina Petrova

I trust only the ones that were built by my team 😁