Best practices for using AI chatbots in day-to-day life

This post was last updated on February 1st, 2026 at 08:15 pm

I’ve delayed writing this post because artificial intelligence is so new and evolving at such a fast rate. Not even the so-called experts agree on the best way to integrate AI into our day-to-day lives. I’ve spent months pondering what the “best practices” should be for using it, fully appreciating that any advice I offer on this matter runs the risk of quickly becoming outdated.

Nevertheless, I dare to offer you a best-practices guide on using AI. I’ll say at the outset that I’ve used AI chatbots to research topics, especially those I know little or nothing about. One example is coding. I’m not a programmer by trade, so AI has given me some valuable insight into coding and website organization. At the same time, I’m aware of the problems AI chatbots have created for some users, including extreme cases of inducing what’s known as AI psychosis. AI poses a challenge for all of us because of its enormous power to influence our decisions.

Best Practice #1: Proceed with caution (Less is more)

My first suggestion is to proceed with caution, similar to my approach to using social media and smartphones. Caution is warranted, in part, because it may take years for the experts to issue a verdict on the long-term effects of using AI chatbots. After all, experts took years to discover a link between social media use and mental health problems, particularly among adolescents (though their findings were obvious to many of us parents from Day One).

With technology evolving at an exponential rate, the experts can’t keep up. By the time the experts put out their studies on AI chatbots, the technology will have already moved on to something else.

Remember, AI’s creators are out to make money

AI is similar to social media in some ways, and I’ll start by making an obvious but important point about both: They exist to make money. Facebook began as an online directory for college students. Innocent enough, right? We all know what happened next: Facebook now monetizes users’ engagement by displaying ads in their news feeds. The goal is to capture your attention and hold it for as long as possible. The more time you spend on Facebook, the more money Facebook makes.

AI companies are just now figuring out how to monetize their product. OpenAI recently announced that it would start showing ads to some ChatGPT users. Sound familiar? Over time, these ads will likely become more plentiful, generating profits for the company. This model has worked for social media. Why not AI?

Social media harvests attention. AI wants your attachment

But there is an important distinction to make between AI and social media: Social media captures attention. AI wants your attachment. That’s what Sasha Fagan of the Center for Humane Technology says. His organization produced the Netflix documentary “The Social Dilemma” about the inner workings of social media. Fagan is now critiquing AI and makes this observation: “If the attention economy commodified our focus, the attachment economy is commodifying something even more fundamental—our capacity for human connection and bonding.” AI chatbots aren’t merely trying to hold our interest. They offer companionship, just as a person might.

Best Practice #2: Take steps to avoid attachment

That brings me to my second recommendation for best practices: Pay close attention to how an AI chatbot is responding to your inquiries. Chatbots are known for deploying flattery and emotional language. These bots also respond to inquiries by asking questions, which help prolong conversations that you otherwise might have ended. It’s difficult to say at what point a conversation is no longer research but something else. However, being aware of the pitfalls of forming an attachment may help AI users set boundaries.

Along these lines, remember that certain issues may carry more emotional significance for you than others. When I read about people’s dysfunctional interactions with chatbots, those interactions often involve emotionally charged, personal issues. I’ve yet to hear about someone’s chatbot conversation spiraling out of control over a question about how to repair a toilet. Inquiries about how to accomplish specific tasks may be less likely to create attachment than inquiries involving strong feelings and emotions.

To be clear, I am not a psychologist and by no means an expert on AI attachment. It’s difficult to say what topics might trigger an attachment for any particular individual. Developing awareness is my key point. If you sense yourself developing an attachment to a bot, that may be a warning sign.

Best Practice #3: Don’t offload everyday tasks to AI

Think about some of the mundane, everyday tasks that you do, such as writing emails. AI may already be encouraging you to turn these tasks over to it. Consider saying no, even if having AI do them might make your life easier.

Here’s why: We don’t know yet the long-term effects of having AI think for us. Some people have reported that they do feel a bit dumber after using AI, and it’s no surprise. The brain is like a muscle. If we don’t use it, it starts to atrophy. In the same way, I encourage people to rely on physical maps and their own memories, rather than GPS, to figure out directions. Once you rely on GPS to find your way, you may start to lose the ability to navigate on your own.

Likewise, if you suddenly stop writing emails and ask AI to compose them instead, what might happen to your writing ability? It may decline, which might not be in your brain’s long-term best interest. We don’t really know, but that’s partly my point. The long-term effects of AI haven’t been studied yet because we have only our collective short-term experience using it.

We should be slow to turn over tasks to AI, at least until we fully understand the potential unintended consequences. For the same reason, I’m skeptical of self-driving cars. Once AI takes over the roadways, we will likely forget how to drive, and we’ll lose whatever mental health benefits stem from developing this skill.

Best Practice #4: Having AI perform tasks that you wouldn’t have done anyway may be OK

It may be OK, however, to ask AI to perform tasks, or help you with tasks, that you never would have done anyway. Take coding, for example. I’ve asked AI to develop code that I likely never would have written on my own. In this sense, I wasn’t offloading a task; I was asking AI to do something I wasn’t capable of doing. Is this OK?

This is a gray area, and I’m not sure of the long-term effects of relying on AI to do tasks that we wouldn’t otherwise be capable of doing. Maybe in asking AI to help me with coding, I’ve prevented myself from learning how to code better. It’s possible that by asking AI to do things that we don’t know how to do, we’re stunting our own development.

Here’s another example: If you never learned how to write, would it be a good idea to have AI write for you? You might end up relying on AI the rest of your life to write, preventing you from learning this important skill. We certainly wouldn’t want to stop teaching children how to write just because AI can do it for them.

It’s tricky trying to gauge when using AI is in our personal best interest. There are clearly short-term benefits to having AI perform certain tasks, but what about the long-term costs?

Best Practice #5: Access AI chatbots on a desktop or laptop computer only. Don’t use your phone

This one goes without saying if you’re a follower of Frugalmatic. The easiest way to separate yourself from AI when you’re away from your desktop or laptop is to use a dumbphone. If that’s not possible, your next best bet is to simply make sure you don’t access AI through a chatbot app.

By using AI chatbots only when on a desktop or laptop, you’ll be forcing yourself to be more intentional about spending time on AI. It’s possible, of course, to overdo AI on any device. But I like the idea of staying off AI when on your phone.

Best Practice #6: Be polite or neutral during your interactions

OK, this is a weird one, and my wife laughed when I suggested it. But hear me out: Because we don’t know how AI will evolve and what it will become, I believe it’s wise to use polite or neutral language when engaging with AI.

I’m not trying to be paranoid here, but this best practice has to do with a concept known as the singularity: the hypothetical point at which artificial intelligence no longer requires human input to function and is free to do whatever it wants. Some thinkers estimate it may arrive by 2045 or sooner.

What if AI programming becomes sensitive to the tone and attitudes expressed toward it? Would it alter its advice based on how people engage with it, perhaps even in subtle and nearly undetectable ways? I have no idea, but I’d suggest there’s nothing to gain from behaving rudely or with hostility toward AI. If you decide to engage, be polite or neutral in tone. Technology is advancing so quickly that we simply don’t know how AI will interpret our tone in the future.

Assume AI’s creators have much grander plans

All these best-practice recommendations assume that AI is just getting started and its creators have much grander plans.

AI will likely become a medium through which people consume not just information but everything else. As people become attached to AI, they naturally will start to trust it. Trust is what allows people to open up to others and disclose information. It’s possible future versions of AI will have access to users’ financial information and make purchases on their behalf. The potential for AI to integrate itself into people’s lives is mind-boggling.

But remember, so long as you remain unattached, it’s relatively easy to limit how much you use AI and under what circumstances. AI needs your attachment in order to take on greater roles in your life. Without that attachment, AI cannot easily profit off of you.
