From spoofed CEOs to deep fakes
Why employees are your best weapon against AI threats
Published May 19, 2025
ARTIFICIAL INTELLIGENCE is a double-edged sword for organizations and their people. While the opportunities around generative AI are essentially boundless, so too are the risks and challenges – because for every new advantage AI gives businesses, it offers the same leg up to cybercriminals.
Research from AAG found that data breaches cost businesses an average of $4.88 million in 2024, with one billion emails exposed in a single year – affecting 20 percent of all internet users. Despite this, data from StrongDM shows that just 17 percent of small businesses have invested in cyber insurance – a gap that could come back to bite them.
“We’ve seen an increase in ransomware activity that utilizes deep fakes and generative AI-enabled social engineering methodologies,” explains Jeff Kulikowski, executive vice president, cyber and E&O, at Westfield Specialty. “The sophistication of these social engineering attacks has increased exponentially in the past 12 to 18 months.”
Westfield Specialty is a prominent global specialty insurance carrier that leverages the financial strength of Westfield, a leading US-based property and casualty insurer. Lloyd’s of London Syndicate 1200, acquired by Westfield in 2023, is part of Westfield Specialty.
Westfield Specialty currently underwrites 10 lines of business in the US, 14 in the UK, and five in Dubai, and has over 400 employees globally. Its experienced team brings deep expertise to the specialty market and offers unique insurance solutions for specialized risks that help protect businesses, recover losses, and drive growth for clients.
Since its establishment in 2021, Westfield Specialty has grown quickly, with annual GWP expected to reach $2 billion globally in 2025.
Rise of social engineering scams
“Whether it’s video conferences or audio calls generated by AI, they’re nearly unrecognizable to employees,” Kulikowski continues. “[As such], we’ve encouraged all of our insureds to increase their awareness of this evolving threat and address their cyber risk insurance concerns.”
And the data certainly aligns with Kulikowski’s viewpoint. Research from Verizon found that one-third of data breaches in 2024 involved ransomware or another extortion technique, while 67 percent of IT professionals say the rise of generative AI has increased their fear of being targeted by a ransomware attack. And the types of attack are only multiplying.
“A lot of [criminals] are using social engineering right now,” says Kulikowski. “We’re also starting to see more attacks utilizing AI technology to infiltrate client networks and spread malware more efficiently than ever before.”
Deep-fake technology is also on the rise – with criminals using new audio and visual tools to pressure unwitting employees to make payments or purchases.
“We’re researching key indicators of deep fakes,” explains Kulikowski. “For example, to deter email spoofing, we have insureds look for any issues with the text – incorrect spellings of domain names or return web addresses, or other grammatical errors. Insureds should train their employees to spot those misspellings and verify that every email comes from a known, correct sender. However, deep-fake AI use in a Teams meeting invite is more difficult to identify.”
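The domain-spoofing check Kulikowski describes can be partially automated. Below is a minimal sketch – not Westfield’s tooling, just an illustration – that flags sender domains suspiciously similar to a hypothetical allowlist (KNOWN_DOMAINS is an assumed placeholder), using only Python’s standard library. A production filter would also validate SPF, DKIM, and DMARC rather than rely on string similarity alone.

```python
import re
from difflib import SequenceMatcher

# Hypothetical allowlist: domains this organization legitimately receives mail from.
KNOWN_DOMAINS = {"westfieldspecialty.com", "example-partner.com"}

def sender_domain(from_header: str) -> str:
    """Pull the domain out of a From: header such as 'Jane Doe <jane@acme.com>'."""
    match = re.search(r"@([\w.-]+)", from_header)
    return match.group(1).lower() if match else ""

def spoof_risk(from_header: str, threshold: float = 0.85) -> str:
    """Classify a sender: allowlisted, a likely lookalike, or simply unknown."""
    domain = sender_domain(from_header)
    if not domain:
        return "no domain found: treat as suspicious"
    if domain in KNOWN_DOMAINS:
        return "domain matches allowlist"
    for known in KNOWN_DOMAINS:
        similarity = SequenceMatcher(None, domain, known).ratio()
        if similarity >= threshold:
            # e.g. 'westfie1dspecialty.com' (digit 1 for letter l) scores ~0.95
            return f"possible lookalike of {known} (similarity {similarity:.2f})"
    return "unknown domain: verify the sender out of band"

if __name__ == "__main__":
    print(spoof_risk("CEO <ceo@westfie1dspecialty.com>"))   # flagged as a lookalike
    print(spoof_risk("Jane <jane@westfieldspecialty.com>")) # matches allowlist
```

The point of the sketch is that a near-miss like “westfie1dspecialty.com” is roughly 95 percent identical to the legitimate domain – exactly the kind of discrepancy human readers tend to gloss over, and the reason employee training pairs well with automated checks.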
‘We’re constantly evolving our approach’
While AI-led tech can be used to create elaborate swindles, pose as CEOs, and con employees into parting with information or money, it can never fully mimic a human. And it often takes a human to spot what the AI lacks in personability – whether that’s a slightly off tone, an illogical response, or rapidly varying speaking volume. Which raises the question: could organizations eventually leverage AI to fight AI crime?
“I think that’s something which may be possible in the future,” says Kulikowski. “But then the criminal-purposed AI would try and find a way to trick the AI that’s identifying it. It’s a never-ending cycle of adopting new strategies to defeat emergent criminal tactics.”
One area where Westfield Specialty is taking proactive measures is in the underwriting process – revising policy language to help mitigate potential cyber threats.
“We’re constantly evolving our approach,” Kulikowski tells IB. “Sometimes it feels like we’re making changes on a daily basis. We started using AI in a defensive mode to help identify and prevent increasingly sophisticated AI attacks. However, much more development is needed in this space. I still think that legitimate AI users who streamline business processes and/or identify and reduce potential risks lag behind the threat actors who employ AI, but more and more commercially available tools are coming to market that can level the playing field.”
What remains difficult with AI is that, when a carrier underwrites a client, most of the AI technology the client uses is embedded within other vendors’ products – it is rarely a standalone offering.
“It’s not like you can just ask cyber insureds for a brief AI questionnaire and fully understand their potential exposures,” quips Kulikowski. “To get the information we need, we now ask more detailed questions around all aspects of a client’s use of AI: what processes it is ingrained in, how closely the insured tracks developing regulations on use cases and privacy, how it vets vendors that use AI technology to deliver services or products, and how PII and other protected or confidential data are addressed in vendor contracts.”
From a coverage standpoint, every carrier has its own approach to these risks, and many of those approaches are still being developed: some offer direct coverage for AI-related risks, others explicitly exclude it, and still others assume coverage based on existing policy language. What Kulikowski says is most difficult for clients right now is the lack of consistent US regulation around AI use.
‘Employee education is the best weapon against AI-driven threats’
“State regulators all across the country are currently developing inconsistent guidance,” he says. “As a carrier, Westfield Specialty has to remain nimble in this changing AI regulatory landscape to understand how such changes impact coverages. I think there’s going to be a lot of changes this year from a regulatory standpoint, both abroad and here in the US. We are committed to responding to the new regulatory environment to enable insureds to obtain the cyber coverages they need.”
But even as the regulation of AI matures, cybercrime continues to present challenges. The crux of a successful fight against online scams comes down to one core factor – your people. Employees, as the first line of defense against cyberattacks, need to be fully engaged to thwart potential dangers.
“Employee education is the best weapon against AI-driven threats,” says Kulikowski. “We advise every current and prospective client to prioritize the training of their employees, to be aware and look for indicators of fraud. Employees really are an organization’s gatekeepers – each day they choose who does and who does not communicate with the company.”
So, whether it’s a suspicious-looking communication, a ransomware threat, or a phishing email, Kulikowski is clear – teach your employees to pass that information on to the relevant internal department.
“I’ve never heard of an IT team complaining that they receive too many suspected scam or phishing email notifications from employees,” he says. “They want that information. They’re so appreciative when employees are over-diligent. It’s really up to our insured clients to incentivize their employees to aggressively combat cyber fraud and risk. They need to understand the costs and potentially negative impacts of ransomware attacks, and the company’s holistic approach to preventing or defeating these incursions. [After all], you don’t want to be the person responsible for the loss – it’s more than a little embarrassing if you’re the one buying hundreds or thousands of dollars’ worth of Amazon gift cards on behalf of your president.
“[At the end of the day], it’s the responsibility of all employees to help reduce AI-enhanced cyber risks; it’s the responsibility of the organization to educate them about how serious such threats can be and how to evolve to help mitigate the ever-changing threat landscape.”
The worrying rise of cybercrime in US organizations
90% of IT professionals say their organization experienced a cyberattack in 2024
20% of these professionals say they’ve experienced at least 12
The most common attacks were:
data breaches (30%)
malware infections (29%)
SaaS and cloud-related breaches (28%)
phishing schemes (28%)
insider activity (28%)
Source: Rubrik Zero Labs
AI in insurance gains support from brokers
73.8% of insurance professionals believe AI can shorten the time required to reach a customer service representative
71.5% cite potential gains in operational performance
71.2% agree AI may outperform humans in pattern detection
74.5% report being satisfied or very satisfied with chatbot interactions
Source: GlobalData
