AI-Powered Writing: 15 Practical Tips
Incorporating AI Tools into Writing: Enhance Efficiency Without Compromising Your Integrity and Unique Voice
A cloud-filled sky is still a writer’s best friend.
Whether it’s for fun, efficiency, or career survival, lots of people I know are using AI assistants and tools for both work and personal use. I'm adapting my writing flow to use AI tools and shortcuts, learning as I go. I mostly use Perplexity, which I picked because I very much like the name, especially since it doesn’t sound human. At the moment, I’m using basic subscription versions of these:
Perplexity for writing and searches
NotebookLM for summarizing/organizing
Otter.ai for speech-to-text
Gamma AI for presentations
Grammarly for basic grammar edits
For many writers, using AI is becoming necessary, but also tricky. How do you navigate the change without losing your voice and integrity?
In this post, I share 15 tips that are helping me incorporate AI tools into my writing workflow. I’m a medical writer, so some tips may not apply to the type of writing you do or the specific AI model you’re using. You may also disagree with some tips, and I may be wrong. Or it might just be a difference of preferences.
I welcome all feedback, corrections, and other points of view. So if you have any (and have time), please share a comment below! Also, I'd love to know if you've written a similar post or recently read one you'd recommend.
Don’t start with broad queries
Broad queries may be fine if you’re looking for a quick-and-dirty gist about something, but I steer clear of this as a starting point for topics I’m writing about.
If I see a whole AI summary before I've started writing, my brain begins from an AI-set direction that can be hard to shift out of. I want my brain to set the direction after finding all the pieces I've grouped into different baskets. So I use AI to help fill the baskets, and once I have the bigger picture fleshed out, I'll use broad queries to check whether I overlooked any baskets.
Respect privacy (yours and others')
We all know whatever we ask AI is "out there" and no longer private. Still, it's easy to forget. Avoid putting any personally identifiable information in your prompts. And be cautious with copy and paste: Is it yours to share? Do you want it shared? Is it legal and ethical to share? If you're unsure, just don't give it to your AI interface.
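If you want a mechanical backstop on top of that judgment, here's a minimal, hypothetical sketch (in Python, with made-up example text) that scrubs a few obvious identifiers before anything gets pasted into a prompt. It's an illustration only; simple patterns like these miss plenty, including names.

```python
import re

# Hypothetical patterns for a few obvious identifiers; illustrative only,
# and they will miss plenty (names, dates of birth, record numbers, etc.).
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b(?:\+?\d{1,2}[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace matches of each pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

if __name__ == "__main__":
    note = "Reach Jane at jane.doe@example.com or 555-123-4567 about her labs."
    print(scrub(note))
    # The name "Jane" still slips through, which is exactly why a human
    # check comes before any copy and paste.
```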
Common sense is a superpower
With whatever response your AI tool pumps out, always ask, “Does this make sense?”
When I started working as an infusion pharmacist in 1999, everything was paper, and the software merely stored and printed labels. Compounding and dose calculations were done by hand, and clinical information was verified manually using clunky book references. As software advanced and started doing more, applying solid number sense and clinical judgment remained as important as ever.
Common sense is an essential layer needed to double-check AI-generated content for errors of all kinds. It’s also why experts and generalists complement each other.
Check biases and blind spots
AI has biases, obviously. All humans (and things made by humans) have biases. Plus, certain groups and viewpoints are over- or under-represented in the publishing and IT industries, which adds even more bias to everything searchable online.
In some cases, AI might break up bias by offering multiple views. Though I’ve noticed this type of output sounds a bit artificially “balanced,” if you know what I mean.
AI might also reinforce your own biases. The very fact that AI tools are marketed as products developed to please the end user might mean they're more likely to tell the user what they want to hear. Maybe I'm off on this, but asking AI often feels a bit like asking a Ouija board. Who's really pushing out the answer?
Although AI tools might help you check whether you overlooked or mischaracterized something due to your own biases, there's no replacement for feedback from other humans with perspectives different from your own. AI also can't resolve the fact that two people can look at the same words or data set and see them differently.
Another blind spot I find in AI-generated content is that it mostly pulls from recent-ish sources. So it might miss something brand new and just out, and it might also overlook something from five-plus years ago that's particularly pertinent for some topics. Historical arcs are often pivotal to understanding.
Follow the sources
Don’t use AI as a black box. Follow the sources to find out where the conclusions are coming from, just as you would from any secondary source. Read the references. Make sure they’re supporting what’s being said. And make sure they’re real.
ICYMI: Trump administration 'MAHA' health report cited nonexistent studies
Check out this Instagram post on fake AI references!
I don’t exactly know how AI models decide which sources to use. I am curious because the sources aren’t always the most obvious and are often not what I would consider to be good choices. Sometimes they’re not even acceptable because they’re out of date or from salesy or questionable sites that lack editorial layers and fact-checking.
Honing your prompt (see below) can help steer output toward the higher-quality sources you tell it to use, but even then, really important key sources might be missing. So always evaluate sources carefully and assume there are other relevant ones to find.
Want to choose your own sources? Google’s NotebookLM is helpful for summarizing and organizing info from your curated list of sources (website links, Google docs, YouTube videos, or copy and pasted text). But again, you need to be familiar with the sources and vet the output.
Focus on readability
AI tools can help you tweak text to make it more accessible or engaging, aligning with the specific audience you're trying to reach. You can ask for rewording options for any purpose and then mix and match bits into your text, but only adopt a suggestion if you think it's an improvement.
Be careful: AI might lose something in the process, making the output different from what you’re trying to say. All output needs to be reviewed carefully, even if it’s just a rewording of your own text you fed it.
Hone your prompts
Prompt engineering is a whole new field, and I can only imagine the technical depth that could go into building prompt libraries and specialized use cases. I still have a lot to learn, and it seems like something worth learning.
I've been learning by trial and error, saving prompts that work well to reuse later. Each prompt gets you one piece, so layering them is important. Prompting is also iterative within a "conversation," so ask follow-up questions or request modifications or corrections to responses. Here are some basics (a rough code sketch follows the list):
Be specific: Tailor your question to what you want to know or find to avoid generic output that doesn’t fit the context.
Find a starting point: You can ask for ideas, examples, or options to start, then follow up with prompts that combine or expand on the initial response.
Be bossy: Use one-word, direct commands (do, don’t, create, find, etc.).
Role play: What point of view do you want the response to come from? You can tell your AI model to take on that persona with prompts like "act as if you're an expert in …" or "you are a parent looking for …", or maybe the person you describe is yourself.
Choose sources: Tell your AI model what to base the response on (recent news stories, published research, competitor sites). Or you can directly ask it to find sources from a specific journal, research area, field, or wherever else you’d like.
Set the scene: For generated text, tell your AI model how you want it (style and tone, reading level, formatting, etc.). You can also give an example of the type of text you’re looking for (as long as it’s sharable).
Set limits: Consider adding limits to the output generated, such as word count, number of examples, or cited sources. You can always ask for more, tweaking the prompt to fine-tune the direction.
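To make the layering concrete, here's a rough, hypothetical sketch that folds the basics above (role, sources, specificity, scene-setting, limits) into a single request. It assumes the OpenAI Python SDK and an API key in your environment; the model name, prompts, and topic are placeholders, and in a regular chat box the equivalent is simply typing the same layered instructions.

```python
# A rough sketch of layering prompt basics into one request.
# Assumes the OpenAI Python SDK (pip install openai) and an API key
# in the OPENAI_API_KEY environment variable; everything else is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

system_prompt = (
    "Act as an experienced medical writer. "                      # role play
    "Base your answer only on peer-reviewed research published "  # choose sources
    "in the last five years, and cite each source you use."
)

user_prompt = (
    "Create a plain-language summary of common statin side effects "    # be specific, be bossy
    "for patients, at an 8th-grade reading level, in a neutral tone, "  # set the scene
    "in no more than 200 words, citing at most 3 sources."              # set limits
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; swap in whatever model your tool offers
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ],
)
print(response.choices[0].message.content)
```

From there, the same conversation can carry follow-up prompts asking for tweaks, corrections, or more (or fewer) sources.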
LinkedIn may be a good place to find people who talk about query techniques specific to what you’re writing about.
Don’t brainstorm into a rabbit hole
Are you prone to Googling yourself into a rabbit hole? I am. The primitive brain wants to believe some life-saving info or dopamine-spiking thing is just one more click away. And AI can find lots of interesting rabbit holes for me to get stuck in, so I set a cutoff point.
Maybe I'll time-limit myself to, say, 15-30 minutes. Or I'll stop when I'm drifting too much or starting to feel overwhelmed rather than openly curious. I keep markers for what I've found so I can pick up where I left off or start searching in a new direction after my brain has had time to reset.
Don’t waste your nice
If you don't like how nice your AI tools sound, you're not alone. It's weird and annoying. And there's no reason for prompts to be polite, either. Direct prompts that sound commanding work better and save energy, both your own and the carbon-producing kind. Then you can save all your niceness for the living world.
Note: Not everyone agrees on leaving out the niceties. "There is increasing evidence that how humans interact with artificial intelligence carries over to how they treat humans," reporter Sopan Deb wrote in a recent NYT article. I think this applies more specifically to kids, but I'm just mentioning it as something to consider.
Copy and paste—no, not, never
Publishers and sites are grappling with whether and how to use generative AI (GenAI). Legal, regulatory, and company-wide rules are slowly chiming in, though still trying to catch up. Professional organizations, like the International Committee of Medical Journal Editors, also issue guidance on authorship and AI use. But as a writer, you also have to think about what you feel comfortable with.
When it comes to anything close to GenAI straight up, I say no. I don’t want my name on that, even if a disclosure is added. Here’s why:
You lose your voice.
You risk your trust.
You might plagiarize.
AI may blur lines of authorship, and ethical standards may evolve with technology, but being a stickler for integrity is always best. Without rewording or using quotes, you could be plagiarizing even if you use the corresponding citations. Plus, if the content came straight from AI, then I don’t feel comfortable being the author. And if I am using AI, I want to be transparent about it and say how.
When it comes to factual reporting, educational content, medical writing, or even first-hand essays like this, a gut-check for me is not to write anything I would not say. It might be phrased slightly differently when I'm writing in a publication's voice, but the information should be the same as what I would trust for myself, friends, colleagues, patients, or loved ones.
Expand interview content
Writers are important information gatherers, and interview content is an underutilized source that carries even more weight amid growing GenAI content. In fact, it's essential to look offline for voices.
Most people consume content but generate zero. Most stories are never heard. Some are even purposefully erased. And some of the most credible thought leaders don’t spread their thoughts online or haven’t even been identified as people with really good ideas who should be leading conversations.
Interviews are also critically important amid changing age demographics. So much knowledge is slipping away. Increasingly, I'm one of the few people (or the only one) in a social or working group who has any adult memory of the 20th century. The loss of perspective is palpable as boomers retire and move offstage. At 50, I feel straddled between young and old.
Mixing in elder insight with diverse, youthful foresight is an awesome combination, and one I want to seek out more as a writer whenever possible.
Put yourself “out there” authentically and selectively
Writing is often solitary, and writers often hide behind words (that’s my comfort zone). So it can be hard to put yourself out there if that doesn’t sync with your style or preferences. It may even go against what you were taught. Staying private is what I learned as a pharmacist. Journalists have long been told not to put themselves in the story.
Everyone’s situation is unique, but it does increasingly seem necessary for writers to be more than typed words to help show they’re real and that their words are really theirs and not from bots. Motivation matters too. People want to know “your why”—that’s always been the first thing I wonder about before reading someone’s work. It’s also the reason I can’t imagine AI replacing all writers. So much of the appeal to reading is the human connection.
The other reason to expand into formats off the page is obvious: Audiences who only read are shrinking by the day (sigh).
Some writers have PR people to tell them when, how, and where, but most writers have to decide on their own. It's hard because almost every exposure leads nowhere and may feel like unfun work. I have a hard time even imagining how writers manage to be all over socials and still have time to create, let alone live.
Your threshold for burnout or a feeling of overexposure may be different from that of other writers you know. Mine feels quite low, and I have no interest in living life on any platform. Still, I'm trying to nudge myself out of my comfort zone more when it's for something I care about and can do in my own voice.
So if you haven’t already, consider doing a podcast, video, or presentation. If someone invites you, why not say yes? And maybe there’s someone you know who could use a nudge. We all need nudges (as long as they’re not pushy or manipulative).
I haven’t done or watched a Substack Live (but I might). How about you?
Optimize for AI searches and audiences
Because AI tools have clear limitations in how they find and process content, writers are having to shift how they write to make their work accessible to GenAI tools. It may be to ensure an article or post shows up in AI-driven searches so your audience can find you. Or it may be because AI is your audience. How to optimize? I've only started thinking about this.
The FDA recently announced that GenAI will be reviewing submission documents such as new drug applications. She even has a name: Elsa. This means regulatory medical writers are scrambling to learn how Elsa processes information so they can adapt their writing and formatting to ensure it's compatible. A regulatory writer once described to me what it was like getting a whole truckload of papers ready for the FDA to review for a new drug approval. With that many documents, I imagine AI could be hugely helpful.
Question “efficiency”: What’s in it for me or any human writer?
If you're relying on AI more but getting paid less, who's benefiting? Or maybe using AI only saves you time if you give up quality you're unwilling to sacrifice. But the bigger question may be: Are you even able to find any paid writing assignments whatsoever? Editing GenAI content is an expanding area, though decent freelance opportunities of any kind seem to be shrinking.
"Should I stay or should I go?" is a question AI has ramped up, but it mostly feels like a false choice. Few can make a living writing. That's always been true, and it's even more true now. The problems in publishing are bigger than any one writer can change.
So why do it? A lot of freelance writers are caregivers or have health reasons that make working full-time or in person harder or not even possible. Freelancing is also the only possible pathway into writing for most people.
Whenever possible, pushing back or respectfully saying no to unfair expectations is the right thing to do. Even if it won't help you, it may help someone else. Being transparent with other writers about your experiences with AI use, as well as typical freelance rates and expectations, also helps.
Mentoring is another way to help. I’m so grateful for the help I’ve had (and have) from editors willing to give me a chance, and I want to pass that on to other writers starting out any chance I get. (And no, an AI writing coach is not the same.)

Don’t give up on whimsy
Maybe it's because my earliest memories are from the 1970s, but I hope to always keep the whimsical vibes of that decade. Whimsy is more important than ever in the age of AI.
Reviving 70s Nostalgia: Shared Power and Serenity for Modern Times
Flowing hair and hemlines. Folksy, funky music. Bold colors and mismatched patterns. There’s something so human about whimsy. And after half a lifetime spent in efficiency, time-management mode, I’ve realized how essential whimsy is to feeling alive. What’s more, I have the confidence again to let it flow, and not worry what others think—the same way I did as a 70s kid.
I'll be leading a roundtable discussion called "Making Diversity a Priority in Clinical Trials: The Role of Medical Communicators." I'm certain to learn new, smart ways of using AI in med comms from more experienced writers, and I'm super excited to connect in person. If I sound overly excited, it's because I am. The last time I presented at a conference was in 1997, when I was in college!