Rather like with the smart home, when it comes to AI, I’ve taken the second wave approach — I let the early adopters get well and truly burned, and give the developers a chance to make a meaningful start at fixing the biggest problems, before slowly dipping my toe in. Compared to non-geeks, I’m still an early adopter, but to the geeks, I’m a slow coach! I don’t regret choosing that route for home automation, and I’m happy I chose the same approach for AI.
I’m not ready to even consider agentic AI yet; that’s still the wild west, so my focus has been on generative AI. My biggest concern was privacy because, just like with search, there’s no such thing as free AI. You will pay; the question is simply how — with your privacy, your money, or a little of both?
A Safe Starting Point
I started my generative AI journey in work rather than with personal stuff, and that let me get started without risking any of my privacy. In corporate settings where it’s corporate data rather than personal data that’s at stake, there should be clearly defined rules, and any products provided for staff should be appropriately configured to keep the organisation safe. It’s probably wildly optimistic to assume that’s true in all organisations, but it is true where I work, so I got to start my AI journey in a safely constrained environment, limited to:
- The enterprise version of Microsoft 365 Copilot with full privacy protections enabled
- Apple Intelligence without the ChatGPT connector (blocked via MDM)
- GitHub Copilot via my corporate identity
- Other GDPR-compliant LLM integrations, used carefully in line with our corporate data protection policies
It didn’t take long for me to realise that I wanted the power of LLMs everywhere, not just in work. So, that’s been my mission for the last few months — adopt AI without sacrificing more of my privacy than I’m comfortable with.
As with all other privacy decisions, I didn’t end up making an absolute decision to only pay with money, or only with data; it really was a yes-and thing for me. Neither choice is universally right or wrong; what matters is that you make deliberate and informed choices!
What Did I Learn from Work?
- For some types of questions, AI chatbots give a better experience than traditional search engines, even ad-free paid search engines.
- In-context task-specific AI assistants can amplify your productivity
- For basic text processing tasks like summarising and re-wording, Apple Intelligence is all I need
After half a year, these are the tools I’ve fallen in love with in the office:
- The Microsoft 365 Copilot sidebar in Edge has become my go-to chatbot for generic queries
- GitHub Copilot, integrated into VS Code, has become my go-to coding assistant
- The WARP Terminal’s built-in AI has become my go-to terminal assistant
- When I want help writing things, I use the Writing Tools Apple Intelligence has added to my Macs
The Twin Questions — The Tool’s Policies & Your Data
To make informed decisions, you need to figure out two things:
- What does the AI tool do with your data?
- What data will you be processing with this tool?
These two questions really interact with each other — at one extreme, you have tools like Apple Intelligence, which encrypt everything end-to-end so Apple never even see your data, let alone have the opportunity to use it in some way you don’t want. With that kind of tool, you can safely process absolutely anything, even the most sensitive data. At the other end of the spectrum, you have the free versions of the major commercial LLMs like ChatGPT. If you give something to the free version of ChatGPT, it’s entirely fair game for them to use as they see fit. After all, that’s the deal they offer — you get to use these colossally costly tools for zero money, and in return, your data helps feed the company’s insatiable appetite for data!
These two questions also have some important nuance.
It’s About More Than Just Your Prompts!
Always remember the question is not just what does the tool do with your prompts, but what does it do with all the context you provide around your prompts? Your prompts are front-and-centre, so you’ll never be in any doubt about those, but for many of the more integrated tools, the context they also ingest can be much less obvious.
When you ask an LLM to summarise some text, that text is the context you’re providing. Pretty obvious. The same is true when you attach some kind of file to your prompt. You know you’re sharing that file with that tool because you’re doing it explicitly. What you need to watch out for is implicit context.
When you use an AI tool that offers text completions, it must constantly ingest the text you’re working on in order to power those suggestions, and not just the text above your cursor, but also the text below! Depending on the tool, it might even be a lot more than just the current document that gets shared; it could be an entire project!
Coding assistants are very similar to writing assistants, but you can almost guarantee that entire projects are being ingested as context.
Browser plugins can also subtly share more than you realise. Some of these plugins use the content of all your open tabs as context, including from tabs that are signed into cloud services like mail clients, financial services, and healthcare portals!
Finally, AI assistants that you connect to your mail, contacts, and calendars can use any of that data as context at any time.
So — while you might think it’s always obvious what you’re sharing with AI companies, it really isn’t! When you’re reading an AI company’s privacy policy, pay attention to the sections that describe everything they ingest as context, and when you get to the part that describes how they use your data, don’t just focus on how they treat your prompts. That’s usually less important than how they treat the additional data that comes along for the ride!
The Tools I’ve Adopted and the Tradeoffs I’ve Chosen
Writing Tools — Apple Intelligence
Let’s start simple — the only writing tools I use are Apple’s. I don’t actually use text tools often, because for me, writing is my mechanism for structuring my thoughts, so I absolutely don’t want to be ‘relieved’ of that task. I see having an LLM write my emails for me as a bug, not a feature! However, intelligent sentence completion is actually useful, as is the ability to get useful summaries of tediously long missives from others, especially eternal CC chains I get tacked onto the end of with the world’s most inconsiderate one-word request — “thoughts?”!
So, since Apple Intelligence’s native features offer truly excellent privacy protections, and since I do all my computing on Apple devices, that really is all I need!
Note that I have not opted into connecting Apple Intelligence to ChatGPT.
Chat Bot — Lumo
In work, I’ve found Copilot’s Edge browser integration to be invaluable for a few reasons:
- It can search the Office 365 data I have access to, as well as the broader internet, and combine both of those data sources with the world knowledge it gets from its ChatGPT back end to give extremely relevant answers
- It clearly cites its sources so you can make informed judgments about how much or little credence to give each answer
- It can use your browser tabs as context
- It can quickly and easily open links right there in your current window
As much as I would love to have all this in my personal life, I’m just not prepared to sacrifice all that contextual data for the privilege, and as an individual, I’m not prepared to pay the full price for an enterprise Office 365 license that covers Copilot.
So, I settled for a different chatbot — one with excellent privacy guarantees, but not a browser plugin. I need to use it in a stand-alone app on iOS, and in its own dedicated tab or in a site-specific browser on my Macs. I chose to pay for the pro version of Lumo from the Swiss privacy-focused services company Proton, of Proton Mail fame.
Lumo has a very clean and simple interface, enhanced with a really cute cat icon, and its model is very capable. I like that it doesn’t reach out beyond its model to the general internet unless you ask it to, and when it does, it very clearly lists its sources, which is extremely important to me.
The model is not quite as good as the absolute most recent models from the big companies, but it’s not that far behind the cutting edge either; I’d estimate it to be about a year behind, at most. To me, it’s not just good enough, it’s genuinely good! In fact, I often find that it gives me better answers than GitHub Copilot when my questions are about general programming concepts rather than a specific piece of code.
Coding Assistant — GitHub Copilot
GitHub Copilot integrated into VS Code works superbly for me in work, and I have a paid personal GitHub account, so this one was easy — I just do at home what I do in work!
Note that I’m not paying for an enterprise GitHub account, so all the code I edit is being used as context, and it, along with my prompts and the completions I accept, can be used by Microsoft and OpenAI to train their models. Depending on the codebase you’re working on, you might find that intolerable, but I consider it an acceptable tradeoff for two reasons:
- None of the code I work on is proprietary, so I don’t have any intellectual property concerns.
- I am religious about never coding any secrets of any kind into my code! None of my scripts have passwords in them; instead, those kinds of things are all passed to the scripts in some way at runtime. If you ever find yourself adding passwords into scripts, stop, you’re doing it wrong!
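To make that pattern concrete, here’s a minimal bash sketch of what I mean by passing secrets in at runtime. The `DB_PASSWORD` variable name and the helper function are purely illustrative choices of mine, not from any real script:

```shell
#!/usr/bin/env bash

# require_secret: a hypothetical helper that refuses to proceed unless
# the secret has been supplied via the environment at runtime, so the
# password never has to be written into the script file itself.
require_secret() {
  if [ -z "${DB_PASSWORD:-}" ]; then
    echo "DB_PASSWORD is not set; refusing to continue" >&2
    return 1
  fi
  # Use the secret without ever echoing its value
  echo "connecting with a ${#DB_PASSWORD}-character password"
}
```

You’d then launch the script with the variable set in the environment, so the secret lives in your password manager or secrets store, never in your code.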
I plan to dedicate an entire PBS tidbit to getting the most out of GitHub Copilot in VS Code because I’m finding it to be an excellent force multiplier. It’s absolutely no replacement for understanding, expertise, or experience, but it can really help experienced devs get a lot more done a lot more efficiently! And, if used carefully, it can help developers at any level develop and expand their skills.
Smart Terminal — WARP
I started to use the WARP terminal before it expertly incorporated AI simply because it has a full-featured modern interface with 21st-century text input and powerful contextual menus. Simply being able to right-click any command and choose to copy just the command, just the output, or both, is a surprisingly rewarding experience!
But, over time, the developers have added deeply integrated support for AI that super-charges my WARP terminal experience every bit as much as GitHub Copilot does my coding experience.
The similarities don’t end there, though. This is another example of a tool that I’ve made an informed decision to pay for with data rather than with money. Like with GitHub Copilot, questions I ask and the commands I run get shared with the developers in exchange for those AI features.
Again, my two reasons for accepting this tradeoff are:
- I don’t process sensitive data on the terminal
- I am religious about never including secrets of any kind in my actual commands; instead, I enter them into secure prompts or read them from the environment.
For some commands, this is easy — for example, sudo never expects your password to be entered as part of the command, but asks you for it using a secure input when you run it. A little less obviously, while you can include database passwords in mysql commands, you don’t have to, and you really shouldn’t. Just use the -p flag without a value, and you’ll get a secure prompt for entering the password!
For commands that don’t offer that kind of thing, you can use the read command to securely read secrets into shell variables and then use those variables in your commands. That way, neither your terminal history (which gets saved to a plain text file!), nor any context that gets sent to the WARP back-end, ever contains your secrets.
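As a sketch of that pattern, here’s a tiny bash function (the names are illustrative choices of mine) that reads a secret silently into a variable; only the secret’s length is ever printed, so nothing sensitive lands in your history file or in any AI context:

```shell
#!/usr/bin/env bash

# read_secret: prompt for a secret without echoing it to the terminal,
# store it in a shell variable, and confirm receipt by length only.
read_secret() {
  # -r: raw input, -s: silent (no echo), -p: the prompt text
  read -r -s -p "Enter secret: " MY_SECRET
  echo >&2  # move to a new line after the silent prompt
  echo "secret received (${#MY_SECRET} characters)"
}
```

The variable can then be interpolated into later commands, so the secret itself never appears in the command text you typed.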
If you are the kind of person who happily includes passwords in terminal commands, stop, you’re doing it wrong! And definitely don’t use any AI tools that don’t guarantee full privacy until you do!
Given how important it is in our modern AI era, I’m giving serious consideration to writing a new Taming the Terminal instalment dedicated to safely handling secrets on the terminal.
I Still Use Kagi!
Just because I’ve added AI tools to my toolset doesn’t mean I’ve abandoned my privacy-respecting paid search provider. It was only a few months ago that I shared my love of Kagi and how refreshing I find having a search engine that sees me as the customer, not as the product!
I’ve found that it’s a matter of using the right tool for the job. There are some questions Kagi answers better and faster than anything else, and there are others where Lumo’s answers are more useful than even the best page of links could ever be.
The Right Tool for the Job
When I need to find a web page, I use a search engine, and when I need to answer a question, I use a chatbot.
Where Search Engines Still Rule
Search engines maintain very up-to-date indexes of all the pages on the public internet. When you need to find one of those pages, search engines work best. For me, that means searches like:
- News articles related to specific stories or topics — Kagi’s news tab with my website customisations works great for this, because I’ve told Kagi which news outlets to favour, and which to omit. I don’t want a summary; I want to read what journalists I trust have to say, or I need to find the best link to share or use in show notes.
- Specific websites — when I want to get the official site for an event, an organisation, or a product, a search engine is perfect.
- Specific documentation — when I know I want the official docs for a specific product, service, feature, API, function call, etc., I find chatbots infuriating. I don’t want to trust a machine summary; I want the actual docs!
Where Chatbots Work Better
On the other hand, when I don’t want a URL or a page, but an answer to a question, I’m finding Lumo and Microsoft 365 Copilot to be far more effective. For me, these are the kinds of things I ask the bots:
- Language questions — by their very nature, Large Language Models know language better than anything else.
- What’s the difference between these related words or concepts?
- What does this phrase mean?
- How is this word used?
- Technical comparisons — think of these bots as being the average of every description and opinion; broad generalisations are their specialty, while reading 5 or even 10 detailed reviews or overviews is tedious!
- How does one tool compare to another?
- What kinds of tools do people use to do some task?
- What’s the difference between these confusingly similar protocols, concepts, or APIs?
- Technical questions — LLMs have ingested the official docs for just about every product out there. When you need a specific answer about a complex product, the LLM can get you there very efficiently, but always remember to check the source it cites before doing something potentially destructive or costly!
- Software Licensing questions — I know exactly where to find the docs for Microsoft’s massive suite of products, and I know that if I read all those docs, they will contain the answers to all my questions, but man, they are dense! I really want something else to ingest and digest all that, and just tell me whether or not the feature I want is covered by the license I have, or the list of licenses that provide some feature! Obviously, this is where Microsoft 365 Copilot shines!
- Permissions questions — like with software licensing, I know where to find the docs that detail the exact roles that permit access to each tool, feature & API call within Microsoft’s massive suite of tools, but man, it gets complicated, especially for cybersecurity features because they span just about every product line. For example, the ability to release an email from quarantine can be granted to users based on their Defender roles, their Entra ID roles, or their Exchange Online roles. The service desk should probably get roles from one source, the sysadmins from another, and the cybersecurity team yet another. Being able to ask Copilot to list all the different options for a single specific feature saves me the need to read hundreds of thousands of words!
- Interface/Setting locations — OS-level settings are bad enough, and I love being able to ask a chatbot where on earth Apple have hidden the setting to control some behaviour that I know is supported but could be just about anywhere in that maze of settings. But as overwhelming as the iOS settings are, they’re nothing, nothing, compared to the maze of settings in Microsoft’s product suite! Do I manage this from the Azure portal, the Entra ID portal, the Defender portal, or the Exchange Online admin portal? And most importantly, where in those sprawling portals are the needed settings hidden‽
- Function/API/Plugin options — there’s rarely just one way to achieve some end when working with modern programming languages, so asking for a list of the available choices can be extremely helpful, and when the docs don’t immediately suggest which is the best fit for your exact needs, you can always ask some follow-up questions.
- Cooking Questions — recipe sites have become some of the worst examples of what’s wrong with the modern internet. When I search for recipes, I find myself on sites stuffed with the most obnoxious ads you could possibly imagine, and the same handful of recipes plagiarised word-for-word over and over again, often hidden far down a page below some AI-generated glop with an obviously fake back-story prepended to it. Asking Lumo for a little help in the kitchen has been a breath of fresh air!
- In what cuisines is some ingredient traditionally used?
- How long should I roast this vegetable at 200°C?
- What herbs and spices pair well with this ingredient?
- I have this and that, what could I make?
- What’s the best way to prepare this ingredient?
- I have a recipe that calls for this thing I don’t have, what could I use instead?
- I have a recipe for oven roasting this thing, but I need to use an air fryer today, how do I convert the time and temperature?
A Few Examples
These are anecdotes, just to give you a flavour of my experiences. They’re just little snapshots from times when I found the answers helpful and had the time and opportunity to capture them.
Microsoft 365 Copilot Decoding Microsoft Licensing
I could have spent half a day figuring this out, easily, but with a little help in my Edge sidebar I got there much more quickly!
![[Screenshot — Copilot Example.png]]
Notice the drop-down of sources, with the most heavily used one pre-selected, and the very useful suggested follow-up prompts that can be clicked on to pre-populate the chat box.
Lumo Helping Me Distinguish Between Similar Terms
I was working on slides for a MUG talk I gave recently and found myself confused about whether I should use phone camera or camera phone — Lumo cleared up my confusion perfectly. I needed both, depending on the slide’s context:

The WARP Terminal Helping Me on the Terminal
Certificate Authority Authorisation (CAA) DNS records are both very important and very obscure. You generally set them and forget them, so years can go by without needing to interact with them. I could absolutely have figured this all out myself, but it would have meant some time in the dig man page, and some time re-reading the CAA DNS record specs. But I was saved all that work by the AI smarts built into the WARP terminal app:
![[Screenshot — Warp Terminal — Terminal Command Example.png]]
Notice that the first thing WARP did was explain the command it was about to offer. Then it offered the command itself, complete with a button to run it, which I did; the tick box before the command indicates it ran successfully, and the chevron on the right expands the box to show the raw terminal output. But I didn’t need to see that raw output, because WARP explained it to me very clearly, telling me exactly what I needed to know.
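For the curious, a CAA lookup with dig usually looks something like `dig +short CAA example.com` (a placeholder domain, not the one from my screenshot), and it prints records in the form `0 issue "letsencrypt.org"`: a flag, a tag, then a quoted value, per RFC 8659. As a sketch only, a tiny bash helper to pull the issuer out of such a record might look like this:

```shell
#!/usr/bin/env bash

# caa_issuer: given one CAA record in dig's +short output format,
# e.g.  0 issue "letsencrypt.org"
# print just the value (the quoted third field) without its quotes.
caa_issuer() {
  echo "$1" | awk '{ gsub(/"/, "", $3); print $3 }'
}
```

The issuer name above is purely illustrative; whichever certificate authorities your CAA records actually authorise is what such a lookup would show.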
Some Final Thoughts
- Don’t just blindly use AI tools; think about the information you’re sending to them, and learn about how those tools use that data.
- Always remember, LLMs don’t know what they don’t know! They always sound confident and well-informed, usually with good cause, and they regularly get things wrong, subtly or utterly!
- I avoid LLMs that don’t show me their sources; the more important the question, the more time I spend digging into those sources for myself!
