
Release Risks: v1.0-Beta

Mona Fawzy & Martina Larkin

March 25, 2022

Everyone agrees we need a safer and more trustworthy environment in which technology companies operate, but few agree on what that means or how to achieve it.

Most tech companies are opaque about the processes and technologies they use to address harms. While many large tech companies do have best practices and assessments for risk management and cybersecurity, we need more transparency, as well as rules and processes to address the trust and safety concerns of users. We also need to determine how to remove or reduce the spread of harmful content without suppressing online innovation and expression.

At System, we’ve decided to speak openly about the risks we face in developing our public resource, and the types of processes we’ve put in place to mitigate them. Our mission at System is to relate everything, to help the world see and solve anything, as a system.

To achieve this mission, the company we are building is as important as our technology. For example, we’re incorporated as a Public Benefit Corporation because we think it is critical that we are as concerned about our impact on society as we are about our financial sustainability. We are driven by a charter that outlines a strong set of values.

As we build System, we frequently sit down as a team to discuss any potential unintended consequences of new features we release on system.com. We rank the risks we identify by severity and likelihood, and we devise and coordinate actions to mitigate those risks (a rough sketch of this kind of ranking follows the list below). These actions are centered on our values:

  • Rational We embrace the scientific method. We test, learn, and change our minds with new evidence. We recognize the limits of data.
  • Open We owe it to our users to be excruciatingly transparent. We believe in open data, open science, and open source. We apply this same philosophy internally, opting for maximum possible openness.
  • Inclusive We respect and celebrate diversity in every form. We believe diverse and inclusive teams win. Building software that reflects the world’s complexity requires a team and dialogue that reflects the world’s diversity.
  • Impactful We are passionate about making an outsized positive impact on society. We are optimists who believe that positive change is possible. We believe that tech can be harnessed for good, but recognize and mitigate the potential for harm.
  • Humble To build what we’re setting out to build, we must be humble and impartial. If we are successful, it will largely be because of our users.
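
For illustration, here is a minimal sketch of how a severity-and-likelihood ranking could be computed. The numeric scales, thresholds, and example calls are hypothetical assumptions; our actual process is a team discussion, not a script.

    # Illustrative sketch of a severity x likelihood risk ranking.
    # Scales, thresholds, and the example risks are hypothetical.
    SEVERITY = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "critical": 5}
    LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "frequent": 5}

    def risk_level(severity: str, likelihood: str) -> str:
        """Map a (severity, likelihood) pair to a coarse risk level."""
        score = SEVERITY[severity] * LIKELIHOOD[likelihood]
        if score >= 15:
            return "high"
        if score >= 6:
            return "medium"
        return "low"

    print(risk_level("major", "rare"))         # low: serious but very unlikely
    print(risk_level("moderate", "possible"))  # medium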

Here are the top risk-related questions we’ve identified for v1.0-beta and the actions we’ve taken to mitigate them to the best of our abilities and resources.

Can System help spread misinformation?

It is our responsibility to:

  • promote content that adheres to the highest standards and best practices in science.
  • provide users with tools and services that disambiguate reliable information from incorrect information or misinformation.
  • provide users with tools to flag information they deem suspicious or problematic, and maintain processes to review that information and communicate any decisions made.

How we are mitigating this risk:

Reproducibility We use a reproducibility score to strengthen each relationship. Each relationship on System is labeled for its potential to be reproduced, based on the completeness of the information and material provided by the author. Read more about our methodology here.
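
Our exact methodology is documented separately; as a rough, hypothetical illustration of the idea, a completeness-based label might be computed like this (the material names and thresholds are assumptions, not System’s actual formula):

    # Hypothetical sketch: label a relationship's reproducibility by the
    # completeness of the material its author provided. The names and
    # thresholds below are illustrative assumptions.
    REQUIRED_MATERIALS = ("data", "code", "methods_description", "materials_list")

    def reproducibility_label(provided):
        """Return a coarse label from the fraction of required materials present."""
        completeness = sum(m in provided for m in REQUIRED_MATERIALS) / len(REQUIRED_MATERIALS)
        if completeness == 1.0:
            return "high"
        if completeness >= 0.5:
            return "medium"
        return "low"

    print(reproducibility_label({"data", "code", "methods_description", "materials_list"}))  # high
    print(reproducibility_label({"data"}))  # low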

External peer review Contributions from a scientific paper carry information about the paper, including the journal it was published in. Papers published in high-impact-factor journals go through rigorous peer review, and we prioritize this information. Though peer review is the most widely adopted standard of quality, it is by no means perfect.

Risk level identified as low. Only a few trusted subject matter experts at System can contribute content today. For this first public release (v1.0-beta), the determination of which datasets, models, and papers statistics are retrieved from falls to members of our team and to users who are beta testing the tools we’ve built for contributing to System.

Eventually, a wider community will be able to contribute, and we will provide pipelines through which contributed content travels for verification and, when necessary, removal.

Can System amplify bias and bigotry in society?

It is our responsibility to:

  • ensure that information about sensitive topics is scrutinized by humans who verify the findings before publication.
  • act swiftly to review information that is flagged by the community as being suspicious, potentially offensive, potentially biased, or malicious.

How we are mitigating this risk:

We are in the early stages of identifying and forming these guidelines, frameworks, and workflows.

Topic and Metric Naming We are aware that language use varies across our communities, and we respect those differences. We are developing a set of guidelines for choosing topic labels and metric labels. The guidelines will be based on known best practices and will be informed, vetted, and extended by our community. They will be regularly revisited to address new areas of concern and sensitive topics.

Topic and Metric Review Workflow We are developing a framework and workflow by which topics and metrics can be reviewed for publication. We intend to rank topic and metric assignments so that we can prioritize areas of critical concern.
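
To make the prioritization idea concrete, here is a minimal sketch of a ranked review queue; the sensitivity scores and example assignments are made up for illustration and are not our production workflow:

    import heapq

    # Hypothetical sketch of a review queue that surfaces the most
    # sensitive topic/metric assignments first. Scores are illustrative.
    class ReviewQueue:
        def __init__(self):
            self._heap = []
            self._count = 0  # tie-breaker so heapq never compares dicts

        def submit(self, assignment, sensitivity):
            # Negate sensitivity: heapq is a min-heap; we want highest first.
            heapq.heappush(self._heap, (-sensitivity, self._count, assignment))
            self._count += 1

        def next_for_review(self):
            return heapq.heappop(self._heap)[2]

    queue = ReviewQueue()
    queue.submit({"topic": "coffee", "metric": "cups per day"}, sensitivity=0.1)
    queue.submit({"topic": "a sensitive health topic", "metric": "incidence"}, sensitivity=0.9)
    print(queue.next_for_review())  # the higher-sensitivity assignment comes out first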

Combating algorithmic bias We ask questions like ‘How was this data collected?’ and ‘What assumptions are we making?’ We actively try to keep a lens on inclusive data practices. We promote a transparent culture that establishes ethical guidelines and empowers the community to speak up if they see something problematic. Whenever possible, we encourage contributors to include a link to the data used in a model or study.

Risk level identified as low. For this first public release (v1.0-beta), the determination of which datasets, models, and papers statistics are retrieved from falls to members of our team and to users who are beta testing the tools we’ve built for contributing to System.

Can bad actors use the new knowledge on System to cause harm?

It is our responsibility to: 

  • identify malicious activities that are harmful to our community and to society.

How we are mitigating this risk:

Bot and fraud detection Both our community of users and our software can flag fraudulent or bot activity and alert us immediately. Our team acts right away to address or block any fraudulent behavior.

Usage auditing System audits how users interact with system.com to help us improve the product. This data can also help us detect behavior that might be suspicious.
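
As a hedged sketch of how audit data can feed detection, here is a simple rate-based heuristic over audited events. The window and threshold are arbitrary assumptions, and real bot detection relies on many more signals than request rate:

    import time
    from collections import deque

    # Illustrative only: flag a user whose event rate exceeds a threshold
    # within a sliding window. Window and threshold are arbitrary.
    WINDOW_SECONDS = 10
    MAX_EVENTS_IN_WINDOW = 50
    recent_events = {}  # user_id -> deque of event timestamps

    def record_event(user_id, now=None):
        """Record one audited interaction; return True if activity looks bot-like."""
        now = time.time() if now is None else now
        events = recent_events.setdefault(user_id, deque())
        events.append(now)
        while events and events[0] < now - WINDOW_SECONDS:
            events.popleft()  # drop events that fell out of the window
        return len(events) > MAX_EVENTS_IN_WINDOW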

Risk level identified as medium. We expose only a limited set of APIs in v1.0-beta.

Can System impact the health and well-being of a user?

It is our responsibility to: 

  • actively protect the health and well-being of users while they use our platform.

How we are mitigating this risk:

Avoiding addictive design At System, when introducing new features or services, we ask “Who does this benefit?” and “How can we better safeguard a user’s health?” User manipulation and exploitation strategies are strict violations of our charter.

Impact mapping We use this planning technique to identify the human behavioral changes that must occur, or not occur, for a product to be successful. More about impact mapping here.

Risk level identified as low. We will produce tools and guardrails to help the community better protect against this risk.

Are there safeguards for user data?

It is our responsibility to: 

  • safeguard the privacy of our users.

How we are mitigating this risk:

Extensive measures to protect PII The privacy and security of user data are our first priority. We will not release product features that compromise or exploit user data in any way. We embrace and promote all standards for protecting PII, and we empower our team to proactively take all necessary precautions. More about our data security policies here.

Risk level identified as low. We collect minimal PII and do not share any of it with external services.
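
As one concrete, hypothetical example of the “collect minimally, share nothing” posture: any payload bound for an external service can be stripped of PII fields first. The field names here are illustrative assumptions, not our actual schema:

    # Hypothetical sketch: remove PII fields from an event before it
    # leaves our systems. The field list is an illustrative assumption.
    PII_FIELDS = {"email", "name", "ip_address", "phone"}

    def scrub_for_external(event):
        """Return a copy of the event with PII fields removed."""
        return {k: v for k, v in event.items() if k not in PII_FIELDS}

    event = {"action": "viewed_topic", "topic": "caffeine", "email": "user@example.com"}
    print(scrub_for_external(event))  # {'action': 'viewed_topic', 'topic': 'caffeine'}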

We will regularly share with you the short- and medium-term risks we see and the actions we’re taking, and we will update our risk framework with each new version of our product. However, we will not communicate an action when we believe its dissemination would likely increase the risk we seek to mitigate (for example, by teaching someone how to game one of our algorithms or processes).

If you are not already a member, we invite you to connect with us and the community on Slack. We are excited to hear all of your thoughts and feedback, particularly about any additional questions we should be posing or other strategies we should employ on our mission.

Join our community and together we can help the world see the whole system.
