The fact remains that in all spheres, professional and personal alike, a number of sobering safety and security concerns continue to surround the widespread adoption and use of artificial intelligence.
As part of their ongoing exploration of AI governance and the developing UK AI Act, a team of data protection specialists examine recent concerns around Microsoft's AI-powered chatbot, Copilot.
Microsoft's decision to add a dedicated Copilot key to new keyboards and
laptops is a clear indication of their faith in the AI chatbot. These upgraded
PCs will likely roll out from February 2025. However, Copilot has been in the
spotlight recently with debates surrounding the way AI is reshaping SaaS
(Software as a Service) platforms. Integrated into Microsoft products like
Office 365, Copilot uses generative AI to automate tasks, generate content and
analyse data, offering businesses new ways to streamline operations and boost
productivity.
Beyond home use, the potential for Copilot to cut down on time-consuming tasks such as document drafting, data analysis and general administration holds an obvious allure for any business. At the same time, however, the technology's rapid adoption across multiple industry sectors has sparked debate and concern around data privacy and compliance.
Before we dive into the current conversation around these concerns (and, more importantly, how you can mitigate the risks), it's important to outline where they have stemmed from and why the controversy has surfaced.
Understanding Copilot's benefits
Copilot works by drawing on data that already exists within an organisation's Microsoft environment to ground and refine its responses.
In Word, for instance, documents can be drafted from prompts, with Copilot suggesting edits and refinements along the way.
When tackling an Excel spreadsheet, the AI tool can analyse trends, generate
summaries and build visualisations of large datasets in a matter of moments.
Presentation slides can be auto-generated, emails and meetings summarised, and task lists curated from a range of information sources. There's no onboarding, either, since Copilot is built into Microsoft products and integrates seamlessly with existing systems.
What's more, from a security perspective, Microsoft's licensing arrangement with OpenAI means the company processes data in its own secure environment - all while assuring us that it won't use customer data for its own training purposes. Crucially, any existing security permissions will be respected by the platform, and additional safeguards have been put in place for EU customers to comply with the EU Data Boundary.
Assessed at surface level, these potential benefits make it easy to see why professionals are keen to get their hands on technology such as this. Copilot sounds like a dream, but while the AI platform certainly offers several advantages, it remains a double-edged sword, with privacy risks that need careful consideration.
Beneath the shine
Peeling back the layers of Copilot reveals a number of "weak points" businesses should be aware of, four of which we'll focus on in this article.
Permission management gaps
One of the most significant risks with Microsoft Copilot relates to poor
permission management. Many organisations struggle to maintain tight control
over user permissions (according to Microsoft's 2023 State of Cloud Permissions Risks Report, less than 1% of granted permissions are actually used), and this can
result in sensitive data being exposed to individuals who should not have
access. While this would be a concern even without the inclusion of artificial
intelligence, Copilot may inadvertently elevate the possibility of sensitive
data falling into the wrong hands.
Thankfully, the mitigation here is fairly straightforward: businesses should uphold strict, role-based access controls and conduct regular reviews of permissions, making sure data is only accessed by authorised users.
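To make the review side of this concrete, part of a permissions audit can be scripted against the Microsoft Graph API. The Python sketch below is a minimal illustration, not an official Microsoft procedure: it assumes you already hold a Graph access token with suitable read permissions, the token and drive ID are placeholders, and pagination and folder recursion are omitted for brevity.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token>"   # placeholder: acquire via your usual OAuth flow
DRIVE_ID = "<drive-id>"    # placeholder: the document library under review
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def get_json(url):
    """GET a Graph URL and return the parsed JSON body."""
    resp = requests.get(url, headers=HEADERS)
    resp.raise_for_status()
    return resp.json()

def top_level_items():
    """List items at the top level of the drive (paging omitted)."""
    return get_json(f"{GRAPH}/drives/{DRIVE_ID}/root/children").get("value", [])

def broad_link_scopes(item_id):
    """Yield the scope of any sharing link granting broad access."""
    data = get_json(f"{GRAPH}/drives/{DRIVE_ID}/items/{item_id}/permissions")
    for perm in data.get("value", []):
        scope = (perm.get("link") or {}).get("scope")
        # "organization" and "anonymous" links expose content far more
        # widely than a named grant - exactly the kind of access Copilot
        # can traverse on a user's behalf.
        if scope in ("organization", "anonymous"):
            yield scope

for item in top_level_items():
    for scope in broad_link_scopes(item["id"]):
        print(f"{item['name']}: shared via '{scope}' link")
```

Flagged items are candidates for tightening before (or alongside) a Copilot rollout, since Copilot will surface anything the querying user can technically reach.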
Risk of data repurposing
Another concern is the potential for data repurposing. Copilot could inadvertently take data that was collected for a specific, lawful purpose and use it for another, incompatible one.
For example, if you use personal data in a data set to improve Copilot's AI
(i.e. training it to perform better), it could create a compliance issue if the
data wasn't collected for that specific purpose. Similarly, there's the
potential for employees to use data collected via Copilot for purposes that fall outside the original agreements.
Robust training is an important part of AI adoption, and staff should receive
regular training and refresher courses on the appropriate use of Copilot,
emphasising the importance of adhering to the original purpose of data
collection and the risks of repurposing data.
While it's important to educate staff on the capabilities of artificial
intelligence systems, it's equally critical that you inform staff of tasks that
Copilot (or any other AI system) may not be suitable for. For example, using
Copilot to compare CVs with job specifications could involve automated
decision-making, which requires proper safeguards to ensure fairness and
compliance.
Bias risks
In cases where Copilot is used to analyse large-scale data, there's a risk
that, if left unchecked, it could amplify any implicit biases contained within
that data, leading to inaccurate or unfair outcomes.
Beyond inaccurate readings of current data, there's also the potential for Copilot to base its decision making on outdated information - which may return skewed or misleading results.
Further, if users rely too heavily on the output without "sense-checking" or questioning the results, the risk of inaccuracy and misinformation increases.
Regular reviews of your data retention policies, along with dataset audits,
should be conducted to identify any biases in historical data or outdated
information. Anyone using Copilot should be encouraged to validate outputs by cross-checking with reliable and reputable sources.
External search risks
Copilot is not a single product, but a suite of products governed by different sets of terms. Where Copilot can't answer a query internally, it will use Microsoft's search engine, Bing, to search the web and summarise the results back to the user. This means the search term will leave the security of your tenancy and is governed by a different, less stringent set of terms.
This may not be problematic in most cases, but for heavily regulated industries, or when highly sensitive data is involved, it could result in a regulatory violation or reputational damage to an organisation.
Thankfully, Copilot settings can be configured to limit or block external searches, reducing the risk of sensitive data leaving your tenancy. You should also establish clear privacy policies that prohibit the use of external queries for sensitive or regulated data.
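Copilot's web-search behaviour itself is controlled through Microsoft 365 admin settings rather than code, but the policy idea behind such rules - screening text for regulated identifiers before it leaves a controlled boundary - can be illustrated in a few lines. The patterns below are deliberately simplified assumptions; production DLP rules, such as Microsoft Purview's built-in sensitive information types, are far more thorough.

```python
import re

# Deliberately simplified illustrative patterns - real DLP rules are
# far more thorough than these two regular expressions.
SENSITIVE_PATTERNS = {
    "UK National Insurance number": re.compile(
        r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b", re.IGNORECASE),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def screen_query(text: str) -> list[str]:
    """Return the names of any sensitive patterns found in a query."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

query = "Summarise the appeal lodged by AB123456C and cc jane@example.com"
hits = screen_query(query)
if hits:
    print("Query blocked, contains:", ", ".join(hits))
```

In practice this kind of screening belongs in your DLP tooling and staff guidance rather than a bespoke script; the sketch simply shows how little sensitive data needs to appear in a prompt for an external search to become a problem.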
Classifying and mitigating risk
A successful Copilot rollout involves more than simply adopting the tool. To classify and mitigate the risks, it's important to regularly review your data management and retention procedures, ensuring they align with your objectives and comply with data protection regulations.
Here's a shortlist of useful reminders for organisations operating in the EU or UK under the GDPR:
Conducting a DPIA
If you're using Microsoft Copilot for large-scale processing of personal data or sensitive data, you'll likely need to carry out a Data Protection Impact Assessment (DPIA). A DPIA is mandatory under the GDPR when data processing is likely to result in a high risk to individuals' rights and freedoms. A well-completed assessment will help you identify and mitigate risks, allowing you to weigh the benefits of Copilot with peace of mind.
Performing an LIA
If you're using Legitimate Interests as the lawful basis for Copilot's business use or data access for training and testing, you must also complete a Legitimate Interests Assessment (LIA).
This will be informed by your DPIA, and must show that your data processing is not unreasonably intrusive. If you are processing special category data, you must ensure you have a specific justification, as this type of data is subject to stricter processing requirements.
Updating your RoPA
To maintain GDPR compliance, you must also update your Record of Processing Activities (RoPA) to include Copilot's data use. If you allow users to personalise their use of Copilot, ensure that the data usage is tracked, recorded, and assessed for legal compliance.
You must also update your Privacy Notices to clearly articulate how Copilot is used. This ensures you can properly handle and respond to data subject rights requests such as Data Subject Access Requests (DSARs) or the Right to Object.
Closing comments
It's clear that Microsoft Copilot has the potential to provide real-world benefits for the future of business, driving efficiency and providing companies with distinct operational advantages. However, as its use expands, and without proper oversight, Copilot could expose organisations - and their consumers - to certain privacy risks. Sense and sensibility must prevail, and professionals would do well to tread these waters with quiet caution.
Moving forward, businesses making use of Copilot (and indeed, other AI tools) need effective data governance structures and data protection best practices in place, not only for the sake of compliance, but also as a means to maximise the tool's potential while safeguarding personal information. In all, clear guidelines for data use, access and retention will enable businesses to mitigate risks and take full advantage of Copilot's capabilities without falling foul of compliance requirements.