
Shadow IT in the GenAI Era


Nearly as long as we’ve had a tech-enabled legal profession, we’ve had a problem with shadow IT. For those unfamiliar with the term, shadow IT is employees’ use of technology for work without the knowledge or approval of their employer. 

Every time there’s a shiny new object in the tech world that seems like it would be useful for day-to-day work tasks, someone somewhere will inevitably use it on the side, either without asking or after asking and being met with a delayed or negative response.

It should come as no surprise that GenAI has had a massive shadow IT effect across organizations in all industries, including legal. We saw it with the launch of ChatGPT in late 2022 – not just in terms of employees using it, but also in terms of employers knowing employees would want to use it and therefore implementing proactive bans on the technology (and eventually more comprehensive usage policies).

Despite those early efforts, rogue use of GenAI persists – often on personal devices but still for official work – whether everyone wants to call it shadow IT or not. Perhaps it’s part of the broader GenAI effect, but people seem oddly averse to just calling this “shadow IT,” instead wanting to somehow add AI to the name of this not-at-all-new phenomenon.

But call it what you will – BYOAI, shadow AI, AI smuggling – the unsanctioned use of technology is a serious issue with significant risk implications for legal organizations, potentially higher than with other shadow IT. 

The 2024 Work Trend Index Annual Report from Microsoft and LinkedIn reported on what they called “BYOAI” (bring your own AI), noting that 73% or more of people in every generation, from Gen Z to Boomers and beyond, were using AI tools at work that were not provided by their organization, with the percentages increasing with each younger generation. Overall, 78% of AI users were engaging in BYOAI, with the rate rising to 80% at small and medium-sized organizations.

The motivations reported for the rogue GenAI use were mixed, with respondents citing recurring themes of overwork and burnout as reasons for using AI, and a desire not to seem replaceable as a reason for hiding that use.

A recent BBC article citing the Microsoft/LinkedIn report took a closer look at why employees, particularly knowledge workers, “smuggle AI to work.” Some of the quotes from anonymous employees are illuminating: 

 

  • "It's easier to get forgiveness than permission. Just get on with it. And if you get in trouble later, then clear it up." 

  • “His unauthorised use isn't violating a policy, it's just easier than risking a lengthy approvals process, one worker says. ‘I'm too lazy and well paid to chase up the expenses,’ he adds.”

 

These attitudes show that employees might not always understand why employers are monitoring or restricting GenAI use, seeing it instead as merely an administrative consideration or a power grab. The latter can be seen in this employee’s quote in the article: “He's not sure why the company has banned external AI. ‘I think it's a control thing,’ he says. ‘Companies want to have a say in what tools their employees use. It's a new frontier of IT and they just want to be conservative.’" 

What this drives home to me is that organizations need to ramp up their focus on GenAI education for their employees, particularly in legal, where the potential risks of using the wrong AI tool are even greater than in other industries. GenAI usage policies are not (or should not be) about control. They’re about critical issues like security posture and protecting confidential, privileged, and sensitive information.

That’s why some are using the term “shadow AI” instead of just “shadow IT,” at least according to IBM’s explanation. GenAI introduces additional kinds of risk, particularly around data management, that have not necessarily been a concern with shadow IT in the past.

Regardless of what you want to call it, the workforce seems intent on reaping the benefits of GenAI, whether or not their employers know or approve.

Now is the time for legal organizations to increase their focus on understanding exactly what AI tools their employees are using and why. Doing so will not only help mitigate risk, but it will also help you understand your employees’ biggest pain points and identify areas where you might want to invest in approved tools.

 

What I’m Watching: 

 

About-face in the TR/ROSS Intelligence case: Today, Judge Bibas revised his 2023 summary judgment opinion and granted partial summary judgment for TR, finding that ROSS infringed TR’s copyrights in 2,243 Westlaw headnotes in training its AI platform. He also rejected the defense of fair use. “A smart man knows when he is right; a wise man knows when he is wrong. Wisdom does not always find me, so I try to embrace it when it does – even if it comes late, as it did here,” Judge Bibas wrote. You can read the full Memorandum Opinion here.

 

The finalists are in: The official list of 15 finalists who will deliver pitches in the annual Startup Alley competition at ABA TECHSHOW has been announced. Read more about the presenting startups here. 

 

Doubling down on Canada: Canada-based practice management provider Clio announced an even stronger focus on the Canadian legal market going forward. As part of that focus, the company appointed Luke Slan as its new General Manager, Canada, and also announced plans to expand its team and create dedicated teams across product, sales, and marketing to provide more tailored support for Canadian law firms.

 

AI for billing compliance: Elite, the provider of financial management and business operations solutions, announced the launch of Elite Validate, a new AI-powered billing compliance solution. The goal is to transform how law firms manage adherence to Outside Counsel Guidelines. 

 

Denmark joins the fundraising trend: Danish legal tech platform Pandektes announced a €2.9M Seed round with participation from People Ventures, German fund Interface Capital, and Nordic business angels. The company plans to use the funds to expand its legal research platform to include legal sources from across the EU. 

 

OpenAI branding: ICYMI, OpenAI released new brand design guidelines, including rules on the proper use of the “blossom” logo and how the company’s name should be rendered in its OpenAI Sans typeface. Read more here.

 

Super Bowl ad breakdown: I don’t know about you, but I expected to see more ads from AI companies during the Super Bowl. We got AI commercials from usual suspects like Salesforce and GoDaddy, but the lineup was surprisingly sparse. The most compelling spot (in my opinion) was for ChatGPT. 

 

 

Editor’s Note: This is the latest installment of my weekly Tuesday column on recent developments in legal tech and AI that have caught my attention. You can find the previous column here. If you have news or stories that you’d like to see featured in a future column, please contact me at press@legaltechnologyhub.com. If you’d like to get this column and other industry analysis in your inbox every week, sign up for a free LTH Insights Newsletter here.
