
Ridiculously Good Advice for Engineers and Lawyers to Communicate

Jan 12, 2023

The 1960s ushered in an era of generational change in America, when the seemingly indecipherable vocabulary of the young gave rise to a “communication gap” with adults. The period also heralded the start of great technological change.

The beginning of the decade saw the first industrial robot put to work on a factory floor. And at the end of the decade, in 1969, the first message was sent between two networked computers: programmers attempted to send the word “login,” but the system crashed after the “o” was transmitted. (The first commercially successful video game, “Pong,” would follow just a few years later, in 1972.)

In some ways, this inauspicious yet exhilarating start of the technology era mirrors the relationship – and a new communication gap – between the two key stakeholders who have been driving the response to new data privacy laws and the policies supporting these technological developments ever since: lawyers and engineers.

The Precision of Language and Power of Alignment

According to a leading technology consulting group, by the end of 2024, modern privacy regulations will cover 75 percent of the world’s population. As of mid-2022, there were more than 100 privacy laws worldwide, with many things in common, but no two were exactly alike. As new privacy laws are enacted, privacy policies and data practices are evolving to accommodate these additional or updated regulations. Since the EU enacted its General Data Protection Regulation (GDPR), the average word count of privacy policies has increased by 25 percent. In fact, the privacy policy for one of the most popular global apps has grown over the past 15 years from 3,000 to 12,000 words – nearly three times as long as the U.S. Constitution.

Engineers sit at the epicenter of innovation, building new services and programs that leverage data and machine learning. Lawyers lead in developing privacy laws, policies, and data processing guidelines. This is exactly where today’s communication gap opens up: lawyers and engineers may see the exact same words, but they interpret them differently, because they speak different languages and use different semantics.

On many levels, these two groups of professionals share similar traits. Both are analytical and think rationally and critically. But ask an engineer what 1+1 equals, and they’ll immediately tell you “2.” Ask the same of a lawyer, and first they’ll want to know why you’re asking, then they’ll consider the risk of responding, and finally the options for responding. Engineers typically engage in rules-based, deterministic thinking; even when they reason probabilistically, the outcome is predictable. They thrive on structure and process and see most things as a problem to be solved. Lawyers, on the other hand, navigate a minefield of nuance and ambiguity before they end up with a solution, or any number of possible solutions, or simply “it depends.”

Unfortunately for both lawyers and engineers, data privacy is not a binary concept. Privacy laws and policies are, for lawyers, a candy box full of ambiguities, with words like “promptly,” “reasonable,” and “adequate performance.” How, then, can an engineer, accustomed to building systems that require precision, create a structure that solves the problem the lawyer sets up? The task of operationalizing privacy starts to seem insurmountable. The solution begins with mutual understanding and peeling back the layers of abstraction in each direction.

To help bridge the communication gap, we’ve put together a brief guide to 12 common privacy terms that shows how lawyers and engineers each think.

Processing Activity
What the lawyer thinks: An operation or set of operations performed on personal data to achieve a business purpose or intent.
What the engineer thinks: Business transactions or API execution paths in systems – the processes (I/O, CPU cycles, network traffic, memory usage, etc.) invoked by a set of related programs, scripts, or queries on computer systems to perform some functionality.

Suitable Safeguards
What the lawyer thinks: Enforceable measures, policies, or procedures for transfers of personal data.
What the engineer thinks: Mostly technical measures such as data center protection, both physical and network-based, plus design criteria that implement “security by design” principles, encryption at rest and in motion, and so on.

Security Measures
What the lawyer thinks: A control or security measure put in place depending on the nature of the personal data being processed and the risks associated with that processing.
What the engineer thinks: The policies, procedures, training, certifications, background checks, encryption, and design, development, and deployment practices needed to ensure that an application stack and the computers that run it are secure.

Lawful Basis
What the lawyer thinks: The proper legal grounds on which an organization is allowed to collect and process personal data.
What the engineer thinks: The reason we’re collecting this personal data.

Data Types
What the lawyer thinks: The specific types of data that are considered personal data – typically higher-level constructs in the context of the business or the application.
What the engineer thinks: Data structures – characters, strings, booleans, integers, floating-point numbers, images, video, etc. – or API schemas and endpoints encapsulating data models or representing a specific business transaction.

Data Processing
What the lawyer thinks: Any operation performed specifically on personal data, such as collecting, recording, organizing, structuring, storing, adapting, altering, retrieving, consulting, using, disclosing by transmission, disseminating or making available, aligning or combining, restricting, erasing, or destroying it.
What the engineer thinks: Activities defined by software, scripts, or queries and run on computers to gather and manipulate data toward a desired end state – typically processing in the context of compute systems or data pipeline infrastructure.

Subprocessor Risk
What the lawyer thinks: The data usage, breach, and supply chain risk that arises from engaging a third-party data processor who has, or will have, access to personal data, or who will process it.
What the engineer thinks: The risk that arises from adding new tools that may have access to the personal data we’re collecting.

Data Subjects
What the lawyer thinks: Natural persons who can be directly or indirectly identified by the personal data being processed, within the specific context of their persona.
What the engineer thinks: The individual users whose personal data we are collecting.

Data Sharing
What the lawyer thinks: Disclosing personal data either within different parts of the organization holding it or to third parties.
What the engineer thinks: One person, group, or system exchanging a set of data with another via a network, a drive, code, or even verbally.

Data Retention
What the lawyer thinks: Storing personal data either for a specified period or only for as long as needed to achieve the purpose for which it was collected.
What the engineer thinks: The time period that you or your program holds data before purging it forever, including storage and disk backups.

Data Minimization
What the lawyer thinks: Limiting the collection of personal information to what is directly relevant and necessary to accomplish a specified purpose.
What the engineer thinks: Gathering the least amount of data needed for the task at hand, such as reducing the feature set needed to train machine-learning algorithms.

Purpose Limitation
What the lawyer thinks: Do not use the personal data collected for more than the required business purpose.
What the engineer thinks: Wanting to use data for a specific purpose but finding that, due to the way it was gathered, knowledge of its source, how it is now represented or aggregated, or other technical issues, it cannot be used in the desired manner.
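
One way engineers can internalize this vocabulary is to make it machine-readable in their own artifacts. As a minimal, hypothetical sketch – the class, field names, and enum values below are illustrative assumptions, not a prescribed standard – a schema field carrying personal data can be annotated with its purpose, lawful basis, and retention period:

```python
from dataclasses import dataclass
from datetime import timedelta
from enum import Enum

class LawfulBasis(Enum):
    """Lawful bases an organization might rely on (illustrative subset)."""
    CONSENT = "consent"                          # the data subject opted in
    CONTRACT = "contract"                        # needed to deliver the service
    LEGITIMATE_INTEREST = "legitimate_interest"  # documented balancing test

@dataclass(frozen=True)
class PersonalDataField:
    """Ties an engineer's schema field to the lawyer's vocabulary."""
    name: str                  # engineer's view: the field, e.g. "email_address"
    data_type: str             # engineer's view: "string", "image", ...
    purpose: str               # lawyer's view: why we collect it
    lawful_basis: LawfulBasis  # lawyer's view: grounds for processing
    retention: timedelta       # both views: how long before we purge it

# Hypothetical example: one field from a sign-up form.
EMAIL = PersonalDataField(
    name="email_address",
    data_type="string",
    purpose="account creation and login",
    lawful_basis=LawfulBasis.CONTRACT,
    retention=timedelta(days=730),
)
```

Annotations like these give the lawyer’s terms a concrete home in the engineer’s codebase, where they can be reviewed, versioned, and tested like any other code.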

Crossing the Bridge to Understanding and Better Product Building

As the guide above shows, bridging the communication gap will take work on both sides. In some cases, the same term produced broadly similar definitions; in others, the definitions seemed to come from two different dictionaries; and in a couple of cases, one side had no working definition of the term at all.

The lesson here: To build understanding – and a better product or program – lawyers and engineers need to come together early in the process. If they can’t speak the same language, they can at least cross the communication barrier to reach a shared understanding of key terms and concepts before the hard work begins.

The solution: The best way to bridge the gap is to navigate the different levels of abstraction. On one side, engineers need to raise their level of abstraction, for example by talking about what a product, or a logical group of product components, does with personal data. Engineers can ask themselves: “What business or user value does it provide? What product does it map to? What are the implications of the legal documents, like the MSAs and DPAs, we sign with our customers and vendors?” On the other side, lawyers need to lower their level of abstraction, moving beyond the purely legal aspects and understanding that a product is built on infrastructure, code, and microservices that may be shared or isolated. Lawyers build careers on asking questions, so ask the right questions to surface and assess the risks.

As a common example, let’s look at Data Analytics as a processing activity at an e-commerce SaaS business, where each side adjusts its usual level of abstraction to meet the other.

To correctly identify all Data Analytics processing activities, the lawyer might pose the question at a slightly lower level of abstraction than usual:

Which core product services, trend-analysis, data-science, and business-intelligence activities in each department of the business involve short- or long-term collection, storage, and statistical analysis of personal data? What specific business outcomes, and what kinds of processing, correspond to the data-analysis workloads in our products and services?

On the engineering side, while still grounding their thinking in the core infrastructure, code, and services they build, engineers can raise their level of abstraction and answer by producing:

A product-wide catalog of data collection sources and sinks, ETL jobs, and the systems and tools in use, along with producer/consumer data maps used for statistical analysis. Data-analytics workloads, reports, and metrics grouped logically by business purpose (rather than by infrastructure, even though both groupings derive from the same source of information) and organized by team or functional area.
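
What might such a catalog look like in practice? Here is a minimal sketch – the workload, team, and table names are entirely hypothetical – of registering analytics workloads with their sources, sinks, and business purpose, then grouping them by purpose rather than by infrastructure:

```python
from dataclasses import dataclass, field

@dataclass
class AnalyticsWorkload:
    """One entry in a product-wide data-analytics catalog (illustrative)."""
    name: str                # e.g. "weekly_revenue_report"
    owner_team: str          # team or functional area responsible
    business_purpose: str    # the logical grouping lawyers care about
    sources: list[str] = field(default_factory=list)  # where data is read from
    sinks: list[str] = field(default_factory=list)    # where results land

CATALOG = [
    AnalyticsWorkload(
        name="weekly_revenue_report",
        owner_team="finance-analytics",
        business_purpose="business intelligence: revenue trends",
        sources=["orders_db.transactions"],
        sinks=["warehouse.revenue_weekly"],
    ),
    AnalyticsWorkload(
        name="churn_model_training",
        owner_team="data-science",
        business_purpose="product analytics: churn prediction",
        sources=["warehouse.user_activity"],
        sinks=["ml_models.churn_v3"],
    ),
]

# Group by business purpose rather than by infrastructure:
by_purpose: dict[str, list[str]] = {}
for workload in CATALOG:
    by_purpose.setdefault(workload.business_purpose, []).append(workload.name)
```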

Similarly, on the topic of Purpose Limitation for the Data Analytics processing activity, the conversation might go as follows:

Privacy lawyer: We need to make sure we have obtained the appropriate consent from individuals for that processing. The purpose of processing the personal data, and the value it provides to our users, should be well-known, and the individuals whose data we are processing must be informed.

Data engineer: I see. We do have consent from the individuals whose data we use for data analytics in our data science and ETL pipelines; it is embedded in our product sign-up workflow. I am unsure, though, about the data analytics the customer success team performs on users’ logged-in sessions. That data doesn’t flow into our data warehouse; we use a third-party product for that.
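
(As a concrete aside: “consent embedded in the pipeline” can be as simple as a filter at the head of an ETL job. The sketch below is purely illustrative; the consent store and record shapes are assumptions, not a description of any particular system.)

```python
class ConsentStore:
    """Toy in-memory consent lookup (purely illustrative)."""
    def __init__(self, grants: set[tuple[str, str]]):
        self._grants = grants  # {(user_id, purpose), ...}

    def has_consent(self, user_id: str, purpose: str) -> bool:
        return (user_id, purpose) in self._grants

def filter_consented(records: list[dict], store: ConsentStore,
                     purpose: str = "data_analytics") -> list[dict]:
    """Drop any record whose subject has not consented to this purpose."""
    return [r for r in records if store.has_consent(r["user_id"], purpose)]

# Usage at the head of a hypothetical ETL job:
store = ConsentStore({("u1", "data_analytics")})
rows = [{"user_id": "u1", "event": "click"},
        {"user_id": "u2", "event": "click"}]
assert filter_consented(rows, store) == [rows[0]]  # u2 is filtered out
```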

Privacy lawyer: That's good to hear for our core product and services. I’ll follow up with the customer success team on the user session analytics. We also need to ensure that we implement appropriate security measures to protect the personal data and that we only retain the data for as long as necessary for the purposes of the analytics.

Data engineer: Absolutely. We have implemented robust security measures and have a data retention policy in place that complies with the requirements. The retention policy is automated via infrastructure policy and code that we continuously monitor and run, and we even have test cases to validate our data retention systems.
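
(A retention policy “automated via infrastructure policy and code,” with test cases, might look roughly like the following sketch. The table name and two-year window are illustrative assumptions.)

```python
import sqlite3
from datetime import datetime, timedelta, timezone

# Illustrative retention windows, one per table (two years here).
RETENTION = {"analytics_events": timedelta(days=730)}

def purge_expired(conn: sqlite3.Connection, table: str) -> int:
    """Delete rows older than the table's retention window; return the count."""
    cutoff = datetime.now(timezone.utc) - RETENTION[table]
    # Table names come from the trusted RETENTION constant, never user input.
    cur = conn.execute(
        f"DELETE FROM {table} WHERE created_at < ?", (cutoff.isoformat(),)
    )
    conn.commit()
    return cur.rowcount

# A test case validating the retention job, as the engineer mentions:
def test_purge_removes_only_expired_rows():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE analytics_events (created_at TEXT)")
    stale = (datetime.now(timezone.utc) - timedelta(days=800)).isoformat()
    fresh = datetime.now(timezone.utc).isoformat()
    conn.executemany(
        "INSERT INTO analytics_events VALUES (?)", [(stale,), (fresh,)]
    )
    assert purge_expired(conn, "analytics_events") == 1  # only the stale row
```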

Privacy lawyer: Great. It’s important that we take these steps to ensure compliance. Every time we add new features or new data pipelines, we need to make sure they are accurately represented in our data inventory and data map, much as we monitor SLAs and uptime for our products and services. This is equally important for customer trust.

Data engineer: Definitely. We take compliance seriously and will continue to monitor our data analytics processing activity.

Privacy lawyer: Thank you. I appreciate your diligence in this matter.

The end goal of an effective privacy program is a positive user experience, balanced with functionality and compliance for the company. Creating that experience requires lawyers and engineers to bridge the communication gap. Together, the two sides are more effective than either can be alone, all while delivering value to the business.

Closing The Communication Gap and Connecting The Dots — Relyance AI

Trust is paramount to business operations today. Trust and understanding among an organization’s employees manifest as trust in its products and business operations. Organizations often get this wrong by approaching privacy and data protection backward, which only deepens the communication breakdown.

As the examples above show, semantic gaps, and top-down mandates pushed against agile engineering and product development workflows, are the root cause of much of the friction. We believe there is a better way.

At Relyance AI, we flip the traditional approach on its head. Instead of superimposing legal requirements onto engineering and machine-learning workflows, we start with the actual source of truth: the code, the data, the systems, the services, the infrastructure, the APIs, and the business processes. Then we map the live state of your data topology and operations onto regulatory requirements, privacy and data policies, and contractual obligations through completely re-imagined workflows. At key steps, we lower and raise the levels of abstraction as needed to foster seamless cross-functional collaboration.

Book a live demo to see how we bridge the gap and connect the dots with the Relyance AI platform.
