It’s amazing how easy AI makes it for anyone to create software these days. You don’t need to be a programmer to whip up some code. But, and this is a big but, just because you can build it doesn’t mean you should ship it straight to your customers without a second thought.
The Public Log Problem
I saw this firsthand when I asked an AI to help with logging website results. It churned out code that created a completely public record of everything people entered into the site. The scary part? There was no built-in security. Anyone could have accessed that information. This is a real worry, especially with so many stories out there about hacked accounts and massive unexpected bills, all because systems lacked basic security.
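To make the pattern concrete, here's a hypothetical sketch of what that kind of mistake looks like (the function names, paths, and the permissions fix are my own illustration, not the original AI output): the insecure version drops the log inside the publicly served directory, while the safer version keeps it outside the web root and locks the file down to the owning user.

```python
import os
import stat
import tempfile

def log_submission_insecure(web_root: str, entry: str) -> str:
    """Anti-pattern: appends user input to a file inside the publicly
    served web root -- anyone who guesses the URL can read it."""
    path = os.path.join(web_root, "submissions.log")
    with open(path, "a", encoding="utf-8") as f:
        f.write(entry + "\n")
    return path

def log_submission_safer(private_dir: str, entry: str) -> str:
    """Keeps the log outside the web root and restricts the file
    to owner read/write only (0o600)."""
    path = os.path.join(private_dir, "submissions.log")
    with open(path, "a", encoding="utf-8") as f:
        f.write(entry + "\n")
    os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)  # owner-only permissions
    return path

# Demo directories standing in for a public web root and a private one.
web_root = tempfile.mkdtemp()
private_dir = tempfile.mkdtemp()
log_submission_insecure(web_root, "email=alice@example.com")
safe_path = log_submission_safer(private_dir, "email=alice@example.com")
print(oct(os.stat(safe_path).st_mode & 0o777))  # 0o600 on POSIX systems
```

The code change is trivial; the point is that an AI will happily generate the first version, and only a review catches the difference.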
Production vs. The Lab
Messing around with AI code in a test environment, like a lab, is one thing. But putting that same code into a live product that real users interact with is a whole different ballgame. If your website or app handles any kind of confidential customer data, you absolutely need someone who knows their stuff to give it a good look-over before it goes live. Even if you’re just storing a domain name, it’s worth a quick check. For anything more sensitive, it’s non-negotiable.
The Hidden Costs of Vibe Coding
This "vibe coding" approach, while quick, often isn’t very efficient. It can create a lot of unnecessary code, declare the same variables multiple times, and generally be a bit messy. Without someone who understands coding to review it, you might be introducing risks you don’t even see. Putting unvetted AI code into production could seriously damage your business.
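A contrived before-and-after gives a flavour of the messiness (the duplication below is illustrative, not taken from any specific AI output):

```python
# What "vibe coded" output often looks like: the same value is
# computed and renamed three times for no reason.
def total_price_messy(items):
    total = 0
    sum_total = 0
    final_total = 0
    for item in items:
        total = total + item["price"]
    sum_total = total
    final_total = sum_total
    return final_total

# The same logic after a human review pass.
def total_price(items):
    return sum(item["price"] for item in items)

items = [{"price": 5}, {"price": 10}]
assert total_price_messy(items) == total_price(items) == 15
```

Redundancy like this is harmless on its own, but it makes the code harder to audit, and that is exactly where security mistakes hide.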
Uncontrolled AI Connections
Another growing concern is teams connecting their systems directly to AI tools without any safety measures. Whatever information the AI can access on that system is potentially at risk of leaking. It’s vital that these connections are designed carefully to minimise such risks. Even if you’re not a seasoned developer, taking a moment to review what the AI is doing and asking questions can make a big difference. Building with security in mind from the start is key.
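One simple safety measure is an allow-list filter that strips everything the AI tool doesn't strictly need before data ever leaves your system. A minimal sketch, assuming a customer-order record (the field names and the redaction policy here are my own assumptions, not from any particular product):

```python
# Assumed policy: only these fields may be sent to an external AI tool.
ALLOWED_FIELDS = {"product_name", "order_status"}

def redact_for_ai(record: dict) -> dict:
    """Forward only explicitly approved fields; everything else
    (emails, card numbers, addresses) is dropped before it leaves."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

customer_record = {
    "product_name": "Widget",
    "order_status": "shipped",
    "email": "alice@example.com",  # never leaves the system
    "card_last4": "4242",          # never leaves the system
}
print(redact_for_ai(customer_record))
# {'product_name': 'Widget', 'order_status': 'shipped'}
```

An allow-list beats a block-list here: new fields added to the record later are excluded by default rather than leaked by default.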
Key Takeaways
- AI can write code, but it won’t take responsibility for security, GDPR, or unexpected costs.
- Always have a security-aware person review AI-generated code before it goes into production.
- Be cautious about connecting systems to AI without proper controls.
The GDPR Risk
Imagine you’ve just put your new AI-assisted feature live, thinking everything’s great. But if you haven’t checked the code properly, you could be facing an immediate GDPR breach. That’s a serious legal and financial headache you don’t want.
A Call for Sanity Checks
So, if your team is using AI to build apps, please, do a proper sanity check. Make sure someone who actually knows what they’re doing has reviewed the code. Don’t let your organisation create "shadow AI" solutions without any security in place. It’s better to talk to a technical person first than to deal with the fallout later. If you’re a business leader looking for more help or have thoughts on AI in business, share them in the comments. Let’s chat about how to use AI to boost profits while keeping risks in check.
