Harriet Farlow’s AI cybersecurity journey to startup leadership

AI security expert and Mileva Security Labs founder Harriet Farlow explains how resilience and curiosity drive leadership and innovation in AI risk management

AI is transforming industries worldwide, and securing these systems remains one of the biggest challenges for businesses and governments. The technology is also reshaping cybersecurity itself, creating new opportunities as well as major challenges for leaders and their organisations.

Harriet Farlow, CEO and Founder of Mileva Security Labs, PhD candidate in AI security at UNSW Canberra, and author of a forthcoming AI security handbook for No Starch Press, has built her career at the forefront of this emerging field, establishing herself as a thought leader just as businesses and governments are beginning to recognise its importance.

“AI security is really hard. It's such a new field that, of course, no one's an expert in it yet,” she says. “But I think the main problem is that so much of the time, people are unwilling to even consider the question of why it's important. I often see a culture of sticking your head in the sand.” 

Ms Farlow’s career path – from studying physics and bioanthropology to roles in consulting and government intelligence, and ultimately founding her own company – highlights the value of pursuing curiosity and offers unique lessons for leaders navigating AI disruption.

If AI security is not taken seriously, nation-state actors are a real threat to AI systems, social structures, and trust in institutions, according to Harriet Farlow, CEO and Founder of Mileva Security Labs. Photo: UNSW Founders

Choosing curiosity over job security in the age of AI 

Do resilience and curiosity matter more than stability in shaping the future of work and technology? From researcher to AI thought leader, Ms Farlow’s career may look like constant reinvention. But she describes it instead as a natural evolution, tied together by her curiosity about the world, a sense of purpose, and attention to personal branding.

Unlike many graduates who pursue careers for stability or predictable job prospects, Ms Farlow says she has always trusted her gut. “I always – from when I picked my degree – basically went with what I was interested in, rather than thinking about employability. In hindsight, I think that was a good thing because all the jobs that were considered safe, like computer science and law, are definitely not safe anymore.” 

By “safe,” she is referring to a current trend: careers that were once widely seen as secure, stable and low-risk – fields that traditionally promised strong job prospects, financial stability and clear professional pathways – have been completely disrupted by AI. Rather than threatening her career, this disruption struck Ms Farlow as a unique opportunity to use her skills to create value and have a real, positive impact.

After completing a Bachelor of Science in physics and bioanthropology at the Australian National University, she joined Deloitte in 2016, consulting on defence data projects for the Navy and Air Force. Realising consulting wasn’t her long-term path, she describes having a “quarter-life crisis” that led her to Stanford University in 2018 to study astrophysics and data science. She then spent a short period at the University of Sydney’s physics department before moving overseas. A pivotal role at a tech education startup in New York followed, where she delivered training workshops across the United States, Mexico, and Canada to help companies adopt emerging technologies.

“It was work that was genuinely interesting and fun, and felt important. Instead of just solving problems for clients and leaving, it was about teaching them how to solve those problems themselves,” she says. 

When the COVID-19 pandemic brought her back to Australia, Ms Farlow was looking for work. She joined the Australian Signals Directorate, working across three cybersecurity teams while completing a Master of Cyber Security and later pursuing her PhD at UNSW Canberra. She eventually became an acting technical director in its AI Hub, liaising with “Five Eyes” intelligence partners to explore AI security in national security contexts.

“AI security was taken extremely seriously in national security, but back in 2022, most private organisations I spoke with at conferences had never heard of it. And those were the companies most likely to be targeted by threat actors, and who needed it the most,” she says. Realising this gap, she was inspired to found Mileva Security Labs in 2023, named after Albert Einstein’s first wife, a gifted mathematician and physicist in her own right. 

How Harriet Farlow built an AI cybersecurity startup 

She says launching Mileva hasn’t been easy, but it has been deeply rewarding and has given her a sense of purpose. “The original goal was, and still is, to raise awareness about AI security,” she says. “However, this didn’t lend itself to a very good business model, because it was basically me telling people that they should care about something they’d never heard of before, and the only evidence for this was classified.”

Two-and-a-half years later, businesses are finally paying attention. “Mileva recently won a landmark contract with the Australian Bureau of Statistics to develop the first AI security policy for a government department outside of national security,” she explains. “We are also developing the first mandatory AI security training in Australia, and one of the only programs of its kind in the world.”

Mileva Security Labs recently won a landmark contract with the Australian Bureau of Statistics to develop the first AI security policy for a government department outside of national security. Photo: Adobe Stock

But the journey has been far from linear. Even after securing staff, advisors, investment and contracts, Mileva faced long procurement processes, slow payment terms and a lack of AI security awareness among almost all of the large organisations it worked with. There was not enough money in the bank to pay staff to carry out the work, and Ms Farlow was forced to let most of the team go.

“It was really hard; I was definitely at rock bottom,” she recalls. “For so long, I hadn’t been paying myself – I was paying my staff because I believed that growing Mileva was the most important thing. But I couldn’t afford to live anywhere; I was so burnt out, and we still weren’t seeing the traction we needed. Between being too early to market and trying to bootstrap the whole thing, Mileva just wasn’t working, and I did consider quitting.”

Instead, she took the opportunity to reset. “Growing a company is hard and scary, but the idea of giving up was so much scarier. This whole experience made it clear that looking after myself is actually the most important thing – if I’m not paid, and if I don’t have the energy to work hard, Mileva will never get anywhere. Now we have really fulfilling consulting and training work, the Mileva brand is becoming more known and respected, and I prioritise myself. Not only am I really happy, but it’s setting Mileva up with a much more sustainable business model.” 

For Ms Farlow, resilience comes from having a strong purpose. “With Mileva, the mission was always raising awareness of AI security, and so, however bad a job I feel like I might be doing, or the company might be doing, that mission is really important to me,” she says. “If people don’t take AI security seriously, nation-state actors will be able to hack us. Not just our AI systems, but our social structures, our trust in institutions. It’s not just a technology risk but an existential one. We have to get it right.”

She also rejects the Silicon Valley “cookie-cutter” model of startups; instead, she set out to build around her own mission and personal brand. Throughout Mileva’s growth, Ms Farlow has continued working on her PhD and writing a book on AI security with prestigious cybersecurity publisher No Starch Press. Juggling so many projects is not easy, but she says it comes down to prioritisation and persistence.

“I still believe the only reason your company is going to fail is if the founder stops,” she says. “There’s always a pivot to make. You just have to be willing to keep going and take every pivot until you find something that works.”

Lessons for businesses in AI cybersecurity, culture, and leadership 

Ms Farlow points out that both technical and cultural issues are shaping how organisations respond to the proliferation of AI. For example, one of the most common misconceptions she hears is that security can simply be delegated to vendors. “When I’m talking to potential clients, and I mention AI security, a lot of the time the response is: ‘Oh, we use Microsoft. Microsoft is responsible for our security’ – which is not true.”

Her recommendation for businesses is that every employee, no matter their role, should have access to clear, reliable information and well-defined policies on AI use and AI security. “People need to know what they’re allowed to put into ChatGPT, what they’re not, and why,” she warns.

Ms Farlow also routinely urges leaders to “risk-assess every tool,” including enterprise products that introduce AI features by default. She advises businesses to document systems clearly, ensure transparency about data flows, provide ongoing training so staff can adapt as their roles change, and define who is responsible for AI security and governance.

For Australia, she sees cultural attitudes as another hurdle. “Even if we have great opportunities [and] very smart people, our culture of risk aversion and tall poppy syndrome makes a massive impact on how innovative our nation can be compared to other regions in the world.” 

Her message to professionals entering or pivoting into AI cybersecurity is not to undervalue their existing skills – and that genuine interest is an important ingredient for success. “AI security is so new that anyone with passion can bring their own unique gifts and contribute to it. We have an opportunity now to shape this field to be more creative, equitable and impactful than technical fields that have come before it. I believe that’s what it means to be a ‘hacker’. It’s about thinking outside the box and being willing to question the status quo,” she says.
