Organisations are increasingly reaping the benefits of fluid information sharing: applied to combined data sets, artificial intelligence (AI) and "machine learning" technologies create valuable insights by cleaning, modifying and adapting information. But some general rules need to be considered in this process.
AI aggregates data to create new intelligence, and the benefits are tangible:
• a retailer using insights to improve customer experience and personalisation
• an organisation analysing emissions to assist in the reduction of its carbon footprint
• a healthcare service provider developing new software tools based on new data insights, to increase the impact of health services in developing countries
• predictive analysis of transport patterns to reduce congestion and improve public transport services, guiding policy decisions
• the collection of pandemic data to identify the effects of lockdown on children’s wellbeing, prompting future safeguarding
• data-sharing between public authorities and supermarkets regarding prioritising food delivery slots during the pandemic
• the analysis of housing needs across the country based on affordability and demand.
The ethical and transparent management and protection of digital assets is critical to today's businesses. The public needs to be able to accept the use of data, confident that it is being used fairly and without harmful impacts. Customers are far more inclined to share data in such circumstances, thereby enabling the benefits of data sharing to be fully realised.
Historically, private data has not always been well managed. This knowledge provokes strong reactions in people: it is quite reasonable for parties to expect that others handle their information ethically. For organisations that embrace the ethical aspects of data use, this can serve as a powerful differentiator. "Trust" is a critical part of this process and an asset of equal importance to the data itself.
Also relevant to the concept of trust is that parties using the data have a right to know that it is accurate and can be relied upon, both in form and content. IBM has estimated that data quality issues cost $3.1 trillion per year in the US alone. The A-level grades debacle that hit the UK over summer 2020 (whereby a poorly designed algorithm downgraded around 40% of predicted results) considerably undermined the public's trust. The outcry was tangible and, many would say, entirely warranted. (Point of order: algorithms are inherently morally neutral. This algorithm performed exactly as it was designed to. Whilst algorithms take humans out of the loop in their operational phase, it is entirely possible for an algorithm to be poorly designed at the outset.)
It may be useful to mention topical "data trusts" at this point. Whilst there seems to be some inconsistency around the meaning of this term, one thing appears to be universally accepted: rather than denoting a "legal" trust (though it is nonetheless an organisation managing data in some way), the term "data trust" encapsulates the very concept of trust in the collection, maintenance, sharing of and access to data. The way data is used should generate trust, both in the information itself and in the data trust's activities, thereby reinforcing the ethical principles. Any consideration of ethics and trust starts with an analysis of how data is collected, shared and used. Business tools are available to assist with the management of this process.
Global privacy laws continue to emerge and mature, providing legal and regulatory frameworks to support these ethical considerations. Carefully managed processes are fundamental to ensuring ongoing compliance with applicable data protection principles. Understanding the full extent of these laws can be complex: the rules themselves are often complicated, and the implications are compounded by the various data parties involved and the nature of the data itself. There are many different types of data users who collect, store, manage, analyse, modify or otherwise use data in various ways. A full analysis of potential legal liability against the legal framework is critical.
Consider the following:
1. Does the data institution "touch" the raw data, or does it simply extrapolate information provided to it in an anonymised format?
2. What agreements / contractual obligations / confidentiality provisions are in place with respect to data?
3. How was the raw data obtained: Is the data recipient confident that it has received the data ethically?
4. What actions have been taken to reduce the risk of re-identification?
5. Has the data been "anonymised" or "pseudonymised" with a view to making it non-re-identifiable?
6. Is it possible to de-anonymise the data? What are the risks of this happening?
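To make the distinction in points 5 and 6 concrete, the following is a minimal, illustrative Python sketch (all names and values are hypothetical, not drawn from any particular organisation's practice). It shows why pseudonymised data generally remains "personal data" in the eyes of regulators: the keyed hash replacing the identifier can be re-linked to the individual by anyone holding the secret key, whereas crude anonymisation removes or generalises identifiers so that no such link exists.

```python
import hmac
import hashlib

# Assumption: the key is stored separately from the data set; whoever
# holds it can re-link pseudonyms to identities, which is what makes
# pseudonymised data re-identifiable in principle.
SECRET_KEY = b"keep-this-key-separate-from-the-data"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a stable, keyed pseudonym (HMAC-SHA256)."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"name": "Jane Doe", "postcode": "AB1 2CD", "diagnosis": "asthma"}

# Pseudonymised: the name is replaced, but the link to the individual
# survives for anyone holding SECRET_KEY.
pseudonymised = {**record, "name": pseudonymise(record["name"])}

# Crudely anonymised: the direct identifier is dropped entirely, and the
# quasi-identifier (full postcode) is generalised to its outward code to
# reduce the risk of re-identification by combination with other data.
anonymised = {
    "postcode": record["postcode"].split()[0],  # "AB1 2CD" -> "AB1"
    "diagnosis": record["diagnosis"],
}

print(pseudonymised)
print(anonymised)
```

Note that even the "anonymised" record is only as safe as the remaining quasi-identifiers allow: combining a generalised postcode with other data sets may still permit de-anonymisation, which is precisely the risk point 6 asks organisations to assess.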
Ransomware and the effect on data privacy
Another data threat continues to loom large: an external threat to the safety of a business's digital assets. As we have come to expect with any valuable asset, criminals are never far away, and data is no exception. The opportunities to exploit this precious resource are plentiful in cyberspace, where criminals lurk around the dark corners of the internet.
Ransomware is often the weapon of choice. Recent years have seen a huge surge in ransomware attacks, exacerbated during the pandemic, when remote working left network vulnerabilities more exposed. Earlier forms of ransomware involved encrypting systems and offering a decryption key in exchange for a paid ransom demand. Some businesses were able to side-step such threats by reinstating digital assets from back-ups (and therefore avoiding payment of any ransom). This led to the "two-pronged" attack: cyber-criminals deploying ransomware variants such as Ryuk and Maze often exfiltrate data as well, demanding payment of a second ransom, failing which the data will be released into the public domain. If the data in question has political, economic or environmental significance, cyber-terrorists may show particular interest, perhaps installing malware to monitor data, or threatening its release, for commercial or political gain.
Whilst it is outside the scope of this paper to list the full range of mitigating measures available to organisations, at the very least a bespoke risk assessment, including cyber risk identification, analysis, and potential transfer (by way of standalone cyber / privacy insurance) would be advisable.
Appropriate "data stewardship" is vital to ensure that businesses operate within the law and reduce cyber risks, particularly as they relate to personal data and the protection of, use of and access to that data. Reputations are made and broken in this arena: the commercial sensitivities ought not to be underestimated.