Over the past few years, the world has seen the introduction of two major data protection regulations: the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). The GDPR, which applies to the European Union, has been in effect since May 2018, nearly two and a half years ago. The CCPA went into effect on Jan. 1, 2020. But what impacts have these major privacy regulations had on the industry?
One year after the GDPR went into effect, we took a look at what had happened in that first year. It turned out that enforcement had been slow to start, with compliance and enforcement of the law both a bit lax. As of February 2019, nine months after the law took effect, only 91 fines had been issued, and most of them were small. One of the major fines at the time went to Google, which was fined €50 million (US$56 million) “for lack of transparency, inadequate information and lack of valid consent regarding the ads personalization,” according to the European Data Protection Board.
Enforcement seems to have picked up since then. As of August 2020, 347 fines have been issued, totalling €175,944,866 (US$208,812,246), Privacy Affairs’ GDPR tracker shows. The largest fine to date is still the one issued to Google in 2019. The smallest, €90 (US$106), went to a hospital in Hungary. Fines not yet included in the tracker include those for Marriott and British Airways, which are still in the proposal stage.
The CCPA went into effect in January, and California started enforcement July 1. As of late August, no fines have been issued yet.
“For GDPR it took almost one year before the bigger fines started taking effect. Because of the fact that CCPA went into a stretch period with COVID, it was a kind of silent launch. In the next six months we will see more and more of the people, the activists trying to enact their rights and we will see more of the effects of this regulation,” said Jean-Michel Franco, director of product marketing for Talend, a data integration and quality platform provider.
GDPR and CCPA’s impacts
Both of these laws have had major impacts on how organizations handle the privacy of their data. According to a 2019 LinkedIn survey of the most popular jobs, companies are investing in data privacy roles. This is especially true in Europe; Data Protection Officer was the fastest-growing job in France, Italy, the Netherlands and Sweden. “What we see is that more and more companies are being conscious that they need to dedicate people and efforts on that,” said Franco.
There are a number of methods companies are using to improve data privacy. For example, data mapping is used to ensure that whenever a removal request comes in from a user, the company can make sure all of that user’s data is aggregated properly, Franco explained. Another method is data anonymization. According to Franco, companies are realizing that managing personal data everywhere is costly and risky, so they determine which systems actually need personal data and which would function just as well with anonymized data.
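The anonymization approach Franco describes can be sketched in a few lines. The field names and the HMAC-based pseudonymization below are illustrative assumptions rather than a prescribed implementation: direct identifiers are dropped or replaced with a keyed hash, so downstream analytics systems never see the raw personal data.

```python
import hashlib
import hmac

# Hypothetical secret key, kept outside the analytics system; rotating or
# destroying it severs the link between pseudonyms and real identities.
SECRET_KEY = b"replace-with-a-managed-secret"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256)."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

def anonymize_record(record: dict) -> dict:
    """Keep only what the downstream system needs; drop or hash the rest."""
    return {
        "user_ref": pseudonymize(record["email"]),  # stable join key, not the email
        "country": record["country"],               # coarse attribute kept as-is
        "purchases": record["purchases"],           # non-personal metric kept
        # name and email are simply dropped
    }

record = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "country": "FR",
    "purchases": 3,
}
clean = anonymize_record(record)
```

A system holding only `clean` can still join records and run aggregate analysis, but a removal request against it is trivial, and a breach exposes no direct identifiers.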
Another side effect of privacy regulations like CCPA and GDPR is that companies are being smarter about how they take advantage of data. These regulations have caused a shift from the ‘opt-out’ consent historically used by marketers to an ‘opt-in’ approach, email marketing company SaleCycle explained in its Email Marketing in a Privacy Conscious World report. According to the report, 32% of marketers now require explicit consent for email marketing, 26% have introduced a stricter opt-in process, and 21% have implemented checkboxes.
“And so all of those companies, they implemented a program so that they reclaim customer opt-in, and then they also manage a little differently the marketing problem so that customers are more engaged,” said Franco. “So overall we see benefits. There are statistics that show that the typical return rates and ROI from a marketing standpoint have improved in Europe because of that. Even if there might be fewer customers that are targeted for each campaign because names went out of the database, the ones that remain are more engaged and the company makes more efforts to really customize the message, to [avoid] bombarding the customer with emails that are not targeted.”
According to Franco, increased data governance has made it easier for companies to start up new projects as well. “I’ve seen a couple of customers that say now that I’ve implemented my GDPR project, I have a single place where all my employee data is described. And so this is not only useful for privacy. This is useful because when I want to launch a new project on HR, I directly know where all of my data is, the data is documented, and there are rules to get access to that.”
Privacy in the UK after Brexit
In the time since the GDPR went into effect, the UK has officially left the European Union. It still operates under the rules of the GDPR, but after December 31, 2020, it will adopt its own regulation, according to an ebook from Zivver, a company that specializes in secure communication. The new regulation, known as the “UK GDPR,” will mirror the GDPR while also containing new additions.
“This would be for practical reasons and to help ensure an adequacy decision with the EU, minimising any impact on cross border data exchange between the regions in 2021 and beyond,” Zivver wrote in its whitepaper.
It is anticipated to go into effect in 2021, Zivver added.
Privacy practices and breaches
Osano recently released a report that examined the relationship between poor privacy practices and data breaches. It evaluated 11,000 of the top websites, based on Alexa Internet rankings.
It found that 2.77% of websites have reported a data breach in the last 15 years. Unsurprisingly, websites that had poor data privacy practices were more likely to experience a data breach. It gave each website a ranking: top quartile, second quartile, third quartile, and bottom quartile. Websites in the top quartile have made proactive efforts to be transparent about data practices, while websites in the bottom quartile have “extremely outdated privacy notices or no privacy notice at all.” Websites in the top quartile had a 1.86% chance of suffering a data breach; websites in the bottom quartile had a 3.36% chance.
In addition to being more likely to be breached, organizations in the bottom quartile suffer more severe data loss when breached. Organizations in the top three quartiles lose an average of 7.7 million records per breach. Organizations in the bottom quartile lose an average of 54.4 million records, roughly seven times as many.
What’s next?
Going forward, Franco predicts that AI ethics will be the next big aspect of data that starts to see more regulation. “I use the analogy with life science. When a new medicine comes into the market, you have to do some trials,” said Franco. “You have to prove that the new thing doesn’t have some adverse effects, and this clinical trial is heavily regulated. So it looks like the European Union is taking this kind of direction for AI and for making sure that the AI algorithms, when they apply to a person, run in a fair way and in a controlled and ethical way … That is pretty interesting and probably the next step because we see more and more of the power of data that can automatically recognize people with their face and everything and that can automatically trigger some decision. And so this is becoming a big thing in terms of how a company must govern the way that the data is used to automate decisions.”
In February, the EU announced new objectives to shape its digital future, one of which was stricter control over the use of AI. “Clear rules need to address high-risk AI systems without putting too much burden on less risky ones. Strict EU rules for consumer protection, to address unfair commercial practices and to protect personal data and privacy, continue to apply. For high-risk cases, such as in health, policing, or transport, AI systems should be transparent, traceable and guarantee human oversight. Authorities should be able to test and certify the data used by algorithms as they check cosmetics, cars or toys. Unbiased data is needed to train high-risk systems to perform properly, and to ensure respect of fundamental rights, in particular non-discrimination,” the European Commission wrote in a statement.
In the United States, there is also a strong need for stricter regulation on the use of personal data by AI, particularly when used for applications like facial recognition. Facial recognition technology is currently used by government agencies in the U.S., including DMVs in several states, airports, and by police, Recode reports. Tech giants like Microsoft have long been vocal in their support for stronger regulations to avoid misuse by governments and companies, and Washington state (where Microsoft is headquartered) passed facial recognition regulations earlier this year. Washington’s law is the first facial recognition law in the US that includes protections for civil liberties and human rights.
EU-US Privacy Shield decision invalidated
On July 16, 2020, the EU-US Privacy Shield was deemed invalid by the Court of Justice of the European Union, essentially the EU’s Supreme Court. The EU-US Privacy Shield “protects the fundamental rights of anyone in the EU whose personal data is transferred to the United States for commercial purposes. It allows the free transfer of data to companies that are certified in the US under the Privacy Shield,” the European Commission stated. The framework includes strong data protection obligations, safeguards on U.S. government access to data, effective protection and redress for individuals, and an annual joint review by EU and U.S. officials to monitor the arrangement.
The EU-US Privacy Shield was created as a result of a complaint from Austrian national Maximilian Schrems. Some of Schrems’ data was transferred by Facebook Ireland to Facebook servers in the United States to undergo processing. In his complaint, Schrems claimed that he did not believe the United States offered sufficient protection for his data against public authorities.
The High Court in Ireland rejected this complaint in 2015, but afterwards the Irish supervisory authority asked Schrems to reformulate it. In the new complaint, he claimed that “the United States does not offer sufficient protection of data transferred to that country” and requested the suspension of future transfers of his data to the United States. When the case was brought to court this time, Decision 2016/1250, otherwise known as the EU-US Privacy Shield, was adopted.
In invalidating the decision, the court stated that “the limitations on the protection of personal data arising from the domestic law of the United States on the access and use by US public authorities of such data transferred from the European Union to that third country, which the Commission assessed in Decision 2016/1250, are not circumscribed in a way that satisfies requirements that are essentially equivalent to those required under EU law, by the principle of proportionality, in so far as the surveillance programmes based on those provisions are not limited to what is strictly necessary.”