[llvm-dev] [RFC] LLVM Security Group and Process


Re: [llvm-dev] [RFC] LLVM Security Group and Process

Jonas Paulsson via llvm-dev

On Nov 26, 2019, at 6:31 PM, Kostya Serebryany <[hidden email]> wrote:

On this list: Should we create a security group and process?

Yes, as long as it is a funded mandate by several major contributors. 
We can't run it as a volunteer group. 

I expect that major corporate contributors will want some of their employees involved. Is that the kind of funding you’re looking for? Or something additional?


Also, someone (this group, or another) should do proactive work on hardening the 
sensitive parts of LLVM; otherwise it will be whack-a-mole. 
Of course, we will need to decide what those sensitive parts are first. 
 
On this list: Do you agree with the goals listed in the proposal?

In general, yes. 
Although some details worry me. 
E.g. I would try to be stricter with disclosure dates. 
> public within approximately fourteen weeks of the fix landing in the LLVM repository
is too slow, IMHO; it hurts attackers less than it hurts the project. 
OSS-Fuzz will adhere to its 90/30 policy (disclosure 90 days after the report, or 30 days after the fix, whichever comes first).

This specific bullet followed the Chromium policy:

Quoting it:
Our goal is to open security bugs to the public once the bug is fixed and the fix has been shipped to a majority of users. However, many vulnerabilities affect products besides Chromium, and we don’t want to put users of those products unnecessarily at risk by opening the bug before fixes for the other affected products have shipped.

Therefore, we make all security bugs public within approximately 14 weeks of the fix landing in the Chromium repository. The exception to this is in the event of the bug reporter or some other responsible party explicitly requesting anonymity or protection against disclosing other particularly sensitive data included in the vulnerability report (e.g. username and password pairs).

I think the same rationale applies to LLVM.


On this list: at a high-level, what do you think should be done differently, and what do you think is exactly right in the draft proposal?

The process seems too complicated, but I have no strong opinion here. 
Do we have another example from a project of similar scale? 

Yes, the email lists some. WebKit’s process resembles the one I propose, but I’ve expanded some of the points which it left unsaid. i.e. in many cases it has the same content, but not as spelled out.


On the Phabricator code review: going into specific details, what do you think should be done differently, and what do you think is exactly right in the draft proposal?

commented on GitHub vs crbug
 
On this list: to help understand where you’re coming from with your feedback, it would be helpful to state how you personally approach this issue:
Are you an LLVM contributor (individual or representing a company)?
Yes,  representing Google. 
Are you involved with security aspects of LLVM (if so, which)?

To some extent:
* my team owns tools that tend to find security bugs (sanitizers, libFuzzer)
* my team co-owns oss-fuzz, which automatically sends security bugs to LLVM 
 
Do you maintain significant downstream LLVM changes?

no
 
Do you package and deploy LLVM for others to use (if so, to how many people)?

not my team
 
Is your LLVM distribution based on the open-source releases?

no
 
How often do you usually deploy LLVM?

In some ecosystems LLVM is deployed roughly every two to three weeks. 
In others it takes months. 
 
How fast can you deploy an update?

For some ecosystems we can turn around in several days. 
For others I don't know.  
 
Does your LLVM distribution handle untrusted inputs, and what kind?

Third party OSS code that is often pulled automatically. 
 
What’s the threat model for your LLVM distribution?

Speculating here; I am not a real security expert myself.
* A developer receives a bug report and runs clang/llvm on the "buggy" input, compromising the developer's desktop. 
* A major open-source project is compromised and its code is changed in a subtle way that triggers a vulnerability in Clang/LLVM.
  The open-source code is pulled into an internal repo and compiled by clang, compromising a machine on the build farm. 
* A vulnerability in a run-time library, e.g. crbug.com/606626 or crbug.com/994957.
* (???) A vulnerability in an LLVM-based JIT triggered by untrusted bitcode. <second-hand knowledge>
* (???) An optimizer introducing a vulnerability into otherwise memory-safe code (we've seen a couple of these in load & store widening).
* (???) A deficiency in a hardening pass (CFI, stack protector, shadow call stack) making the hardening ineffective.
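To make the load-widening concern above concrete, here is a minimal C++ sketch; the struct and function names are invented for illustration and are not from LLVM itself. It shows the source-level pattern: two adjacent narrow loads that an optimizer may merge into one wider load, a transform that is only sound if the compiler correctly proves the whole widened region is in-bounds.

```cpp
#include <cstdint>

// Illustrative only: invented names, not an LLVM API.
struct Flags {
  uint8_t a;
  uint8_t b;
};

// Two adjacent one-byte loads. An optimizer may widen them into a single
// two-byte load of the pair. That is only safe when both bytes are provably
// in-bounds; a bug in that reasoning is exactly the "optimizer introduces a
// vulnerability into memory-safe code" scenario listed above.
int SumFlags(const Flags *f) {
  return f->a + f->b;
}
```

Nothing here is unsafe as written; the point is that the safety of the widened load rests on a proof inside the compiler, which is why such transforms are listed as a potential vulnerability source.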

My 2c on the policies: if we actually treat some area of LLVM as security-critical, 
we must not only ensure that a reported bug is fixed, but also that the affected component gets
additional testing, fuzzing, and hardening afterwards. 
E.g. for crbug.com/994957 I'd really like to see a fuzz target as a form of regression testing.
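A fuzz target of the kind suggested here is small. The following is a generic libFuzzer-style sketch; `ParseRecord` is a placeholder standing in for whichever LLVM entry point was affected, not a real API.

```cpp
#include <cstddef>
#include <cstdint>
#include <string>

// Placeholder for the component under test; a real harness would call the
// affected LLVM entry point (e.g. a demangler or bitcode reader) instead.
static bool ParseRecord(const std::string &Data) {
  return !Data.empty() && Data[0] == 'R';
}

// libFuzzer calls this entry point repeatedly with mutated inputs; any
// crash or sanitizer report becomes a reproducible regression test case.
extern "C" int LLVMFuzzerTestOneInput(const uint8_t *Data, size_t Size) {
  ParseRecord(std::string(reinterpret_cast<const char *>(Data), Size));
  return 0;  // Always return 0; other values are reserved by libFuzzer.
}
```

Built with `clang++ -fsanitize=fuzzer,address harness.cpp`, this yields a self-contained fuzzing binary; checking such a harness in alongside the fix gives continuous-fuzzing infrastructure a permanent regression target.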

Thanks, this is great stuff!


--kcc 
 

On Sat, Nov 16, 2019 at 8:23 AM JF Bastien via llvm-dev <[hidden email]> wrote:

Hello compiler enthusiasts,


The Apple LLVM team would like to propose that a new security process and an associated private LLVM Security Group be created under the umbrella of the LLVM project.


A draft proposal for how we could organize such a group and what its process could be is 
available on Phabricator. The proposal starts with a list of goals for the process and Security Group, repeated here:

The LLVM Security Group has the following goals:
  1. Allow LLVM contributors and security researchers to disclose security-related issues affecting the LLVM project to members of the LLVM community.
  2. Organize fixes, code reviews, and release management for said issues.
  3. Allow distributors time to investigate and deploy fixes before wide dissemination of vulnerabilities or mitigation shortcomings.
  4. Ensure timely notification and release to vendors who package and distribute LLVM-based toolchains and projects.
  5. Ensure timely notification to users of LLVM-based toolchains whose compiled code is security-sensitive, through the CVE process.

We’re looking for answers to the following questions:
  1. On this list: Should we create a security group and process?
  2. On this list: Do you agree with the goals listed in the proposal?
  3. On this list: at a high-level, what do you think should be done differently, and what do you think is exactly right in the draft proposal?
  4. On the Phabricator code review: going into specific details, what do you think should be done differently, and what do you think is exactly right in the draft proposal?
  5. On this list: to help understand where you’re coming from with your feedback, it would be helpful to state how you personally approach this issue:
    1. Are you an LLVM contributor (individual or representing a company)?
    2. Are you involved with security aspects of LLVM (if so, which)?
    3. Do you maintain significant downstream LLVM changes?
    4. Do you package and deploy LLVM for others to use (if so, to how many people)?
    5. Is your LLVM distribution based on the open-source releases?
    6. How often do you usually deploy LLVM?
    7. How fast can you deploy an update?
    8. Does your LLVM distribution handle untrusted inputs, and what kind?
    9. What’s the threat model for your LLVM distribution?

Other open-source projects have security-related groups and processes. They structure their groups very differently from one another. This proposal borrows from some of these projects’ processes. A few examples:
When providing feedback, it would be great to hear if you’ve dealt with these or other projects’ processes, what works well, and what can be done better.


I’ll go first in answering my own questions above:
  1. Yes! We should create a security group and process.
  2. We agree with the goals listed.
  3. We think the proposal is exactly right, but would like to hear the community’s opinions.
  4. Here’s how we approach the security of LLVM:
    1. I contribute to LLVM as an Apple employee.
    2. I’ve been involved in a variety of LLVM security issues, from automatic variable initialization to security-related diagnostics, as well as deploying these mitigations to internal codebases.
    3. We maintain significant downstream changes.
    4. We package and deploy LLVM, both internally and externally, for a variety of purposes, including the clang, Swift, and mobile GPU shader compilers.
    5. Our LLVM distribution is not directly derived from the open-source release. In all cases, all non-upstream public patches for our releases are available in repository branches at https://github.com/apple.
    6. We have many deployments of LLVM whose release schedules vary significantly. The LLVM build deployed as part of Xcode historically has one major release per year, followed by roughly one minor release every 2 months. Other releases of LLVM are also security-sensitive and don’t follow the same schedule.
    7. This depends on which release of LLVM is affected.
    8. Yes, our distribution sometimes handles untrusted input.
    9. The threat model is highly variable depending on the particular language front-ends being considered.
Apple is involved with a variety of open-source projects and their disclosures. For example, we frequently work with the WebKit community to handle security issues through their process.


Thanks,

JF


_______________________________________________
LLVM Developers mailing list
[hidden email]
https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev



Re: [llvm-dev] [RFC] LLVM Security Group and Process

Jonas Paulsson via llvm-dev
In reply to this post by Jonas Paulsson via llvm-dev


On Dec 3, 2019, at 9:03 AM, James Y Knight <[hidden email]> wrote:



On Mon, Nov 25, 2019 at 5:38 PM JF Bastien <[hidden email]> wrote:


On Nov 25, 2019, at 7:36 AM, James Y Knight <[hidden email]> wrote:



On Tue, Nov 19, 2019 at 10:46 AM JF Bastien <[hidden email]> wrote:
And I do agree that if someone were to come in and put in the significant amounts of work to make LLVM directly usable in security-sensitive places, then we could support that. But none of that should have anything to do with the security group or its membership. All of that work and discussion, and the decision to support it in the end, should be done as a project-wide discussion and decision, just like anything else that's worked on.

Here’s where we disagree: how to get from nothing being treated as security-sensitive to the right things being treated as security-sensitive.

I want to put that power in the hands of the security group, because they’d be the ones with experience handling security issues, defining security boundaries, fixing issues in those boundaries, etc. I’m worried that the community as a whole would legislate things as needing to be secure, without anyone in the security group able or willing to make it so. That’s an undesirable outcome because it sets them up for failure.

Of course neither of us is saying that the community should dictate to the security group, nor that the security group should dictate to the community. It should be a discussion. I agree with you that, in the transition period from no security to the right security, there might be cases where the security group disappoints the community, behind temporarily closed doors. There might be mistakes; an issue which should have been treated as security-related won’t be. I would rather trust the security group, expect that it’ll do outreach when it feels unqualified to handle an issue, and fix any mistakes it makes. Doing so is better than where we are today.

My worry is actually the inverse -- that there may be a tendency to treat more issues as "security" than should be. When some bug is reported via the security process, I suspect there will be a default-presumption towards using the security process to resolve it, with all the downsides that go along with that.

Agreed, that polarity is also a risk. I don’t see how to fix this issue either, except to trust the security group. Its members will be more competent at doing the right thing than the general LLVM community because they’ve dealt with this stuff before.

Again, I find it entirely reasonable to place trust in a small subset of the members of the LLVM community to do the right thing in response to security issues which must remain temporarily secret. It's infeasible to allow the entire community to participate. I just don't want to entrust anything else to the Security Group, as an organization, because it's unnecessary (despite that they would likely be entirely worthy of that trust).
What I want is for it to be clear that certain kinds of issues are currently explicitly out-of-scope.
Yes I want this list, but I don’t think we need it now. Once we’ve got a group of experts looking at security issues they can incrementally figure out that list. Do you think that’s acceptable?

We know now, even before any issues have been reported through this process, what some of the areas of concern are going to be. Some have been mentioned before on this thread, and others likely have not. I would like to see it explicitly called out, up front, how we expect to treat certain issues without waiting for them to be reported.

Why do I want that? Because I want the security group's mission statement and mandate from the community to be clear. If there's disagreement about which sorts of things should or should not be treated as security issues (which I suspect there may well be), I'd like that to be hashed out in the open now, rather than delaying any such debate until such a time as it must be hashed out in private by the Security Group in response to a concrete private vulnerability report.

However, I agree it's not necessary for you to define this immediately. If you'd like to attempt to find other volunteers to author those policies, rather than doing it yourself, I see absolutely no problem with that. But I would still like to see such a document get proposed and reviewed via the project's usual open discussion forum (mailing lists, code reviews on new policy docs, etc), as soon as possible, in order to reduce surprises as much as possible. (Recognizing that it cannot and should not attempt to cover every eventuality.)

A separate discussion as you describe sounds good to me.



Re: [llvm-dev] [RFC] LLVM Security Group and Process

Jonas Paulsson via llvm-dev
In reply to this post by Jonas Paulsson via llvm-dev
On 15 Nov 2019, at 19:58, JF Bastien via llvm-dev <[hidden email]> wrote:
The Apple LLVM team would like to propose that a new security process and an associated private LLVM Security Group be created under the umbrella of the LLVM project.

A draft proposal for how we could organize such a group and what its process could be is 
available on Phabricator. The proposal starts with a list of goals for the process and Security Group, repeated here:

The LLVM Security Group has the following goals:
  1. Allow LLVM contributors and security researchers to disclose security-related issues affecting the LLVM project to members of the LLVM community.
  2. Organize fixes, code reviews, and release management for said issues.
  3. Allow distributors time to investigate and deploy fixes before wide dissemination of vulnerabilities or mitigation shortcomings.
  4. Ensure timely notification and release to vendors who package and distribute LLVM-based toolchains and projects.
  5. Ensure timely notification to users of LLVM-based toolchains whose compiled code is security-sensitive, through the CVE process.

We’re looking for answers to the following questions:
  1. On this list: Should we create a security group and process?
Yes, I think that is a good idea.

  1. On this list: Do you agree with the goals listed in the proposal?
Yes, but I hope we can clarify what "time to investigate" and "timely notification" mean, in more precise terms.

  1. On this list: at a high-level, what do you think should be done differently, and what do you think is exactly right in the draft proposal?
With regards to the embargo time limits, I think that 90 days is a rather long minimum time.  Remember that major LLVM release cycles are just ~180 days!  Then again, I realize that some downstream organizations have very elaborate release cycle procedures.  I just wish they were shorter for critical security issues.

I also think that fourteen weeks from a commit landing to making the issue public is not really doable.  Are we really going to commit something with a message "what this commit does is a secret, see you in 14 weeks"?  And then expect nobody to look at what the changes entail, and derive the actual issue from that?  It seems unrealistic, to say the least.

  1. On the Phabricator code review: going into specific details, what do you think should be done differently, and what do you think is exactly right in the draft proposal?
I'll post the above on the review

  1. On this list: to help understand where you’re coming from with your feedback, it would be helpful to state how you personally approach this issue:
    1. Are you an LLVM contributor (individual or representing a company)?
I am both an individual contributor to LLVM, and a member of the FreeBSD community, where I am mostly responsible for maintaining the LLVM fork (well, just plain LLVM with a few hacks and additional patches) in the FreeBSD source tree.

    1. Are you involved with security aspects of LLVM (if so, which)?
Not especially, though I have been involved with diagnosing quite a number of LLVM crash bugs, of which at least some could possibly be abused in security contexts.  That said, I have not been actively searching for any security holes.

    1. Do you maintain significant downstream LLVM changes?
No, we try to keep the differences between stock LLVM components and the FreeBSD versions as small as possible.  We do, however, apply a few minor customizations, and quite a number of post-release patches.  Most of these are to fix issues with compiling the rather large FreeBSD ports collection (roughly 33,000 ports), for a number of different architectures.

    1. Do you package and deploy LLVM for others to use (if so, to how many people)?
Yes, we build LLVM components such as clang, compiler-rt, libc++, lld, lldb and a number of llvm tools as part of the FreeBSD base system.  These are shipped as releases in binary and source code form.  Regular snapshots (roughly every week) are also made available.

FreeBSD is used by quite a number of people and organizations, but since the project does not actively track its users, I don't know any hard (or even semi-soft :) numbers.  There are also a number of projects downstream from FreeBSD, such as FreeNAS and TrueOS.

    1. Is your LLVM distribution based on the open-source releases?
Yes, and almost all the patches we apply are from the regular LLVM trunk or master.

    1. How often do you usually deploy LLVM?
Normally we update to each new major and minor release as they appear, and we are also involved in the testing process before those releases.  Since not every bug that affects FreeBSD gets fixed before an LLVM release, we also regularly apply fixes after LLVM releases.

    1. How fast can you deploy an update?
If any issue affects a released version of FreeBSD, it will be handled by the FreeBSD Security Team.  They will investigate the severity and impact of the issue, verify that the fix(es) apply and have the promised mitigating effect, and determine that there are no negative side effects.  Then they will build new binary bits that go out via the binary update system we use, a.k.a. freebsd-update.  Building the bits can be done in a few days, but the investigation is harder to pin down time-wise.

    1. Does your LLVM distribution handle untrusted inputs, and what kind?
The toolchain parts, e.g. clang, lld and lldb, can obviously be used for arbitrary source code, but will seldom be useful for e.g. privilege escalations.  Other parts, such as compiler-rt and libc++, are installed as system wide dynamic libraries.  Vulnerabilities in these could affect any application on the system which links to them.

    1. What’s the threat model for your LLVM distribution?
As described in the previous item, specifically the compiler-rt and libc++ libraries can be dependencies of many applications, some of which will be part of the system, and also be security sensitive.  (For example, FreeBSD's device daemon devd <https://www.freebsd.org/cgi/man.cgi?query=devd> is written in C++, and linked to libc++.)


Other open-source projects have security-related groups and processes. They structure their groups very differently from one another. This proposal borrows from some of these projects’ processes. A few examples:
When providing feedback, it would be great to hear if you’ve dealt with these or other projects’ processes, what works well, and what can be done better.

I haven't interacted with any of the above security groups.  But I have interacted with the FreeBSD Security Team, which has a page here <https://www.freebsd.org/security/>. Their process is fairly straightforward, and most of it is handled via email and a private Bugzilla instance.

-Dimitry




Re: [llvm-dev] [RFC] LLVM Security Group and Process

Jonas Paulsson via llvm-dev
In reply to this post by Jonas Paulsson via llvm-dev


On Wed, Dec 4, 2019 at 3:36 PM JF Bastien <[hidden email]> wrote:

On Nov 26, 2019, at 6:31 PM, Kostya Serebryany <[hidden email]> wrote:

On this list: Should we create a security group and process?

Yes, as long as it is a funded mandate by several major contributors. 
We can't run it as a volunteer group. 

I expect that major corporate contributors will want some of their employees involved. Is that the kind of funding you’re looking for?

Yes! 
 
Or something additional?


Also, someone (this group, or another) should do proactive work on hardening the 
sensitive parts of LLVM; otherwise it will be whack-a-mole. 
Of course, we will need to decide what those sensitive parts are first. 
 
On this list: Do you agree with the goals listed in the proposal?

In general, yes. 
Although some details worry me. 
E.g. I would try to be stricter with disclosure dates. 
> public within approximately fourteen weeks of the fix landing in the LLVM repository
is too slow, IMHO; it hurts attackers less than it hurts the project. 
OSS-Fuzz will adhere to its 90/30 policy (disclosure 90 days after the report, or 30 days after the fix, whichever comes first).

This specific bullet followed the Chromium policy:

Quoting it:
Our goal is to open security bugs to the public once the bug is fixed and the fix has been shipped to a majority of users. However, many vulnerabilities affect products besides Chromium, and we don’t want to put users of those products unnecessarily at risk by opening the bug before fixes for the other affected products have shipped.

Therefore, we make all security bugs public within approximately 14 weeks of the fix landing in the Chromium repository. The exception to this is in the event of the bug reporter or some other responsible party explicitly requesting anonymity or protection against disclosing other particularly sensitive data included in the vulnerability report (e.g. username and password pairs).

I think the same rationale applies to LLVM.

ACK. If the OSS-Fuzz 90/30 policy doesn't work for LLVM, 
we could spin up an independent instance of ClusterFuzz 
(at some extra maintenance and VM cost, of course), 
although I would rather not do that if we can avoid it.  


On this list: at a high-level, what do you think should be done differently, and what do you think is exactly right in the draft proposal?

The process seems to be too complicated, but no strong opinion here. 
Do we have another example from a project of similar scale? 

Yes, the email lists some. WebKit’s process resembles the one I propose, but I’ve expanded some of the points which it left unsaid. i.e. in many cases it has the same content, but not as spelled out.


On the Phabricator code review: going into specific details, what do you think should be done differently, and what do you think is exactly right in the draft proposal?

commented on GitHub vs crbug
 
On this list: to help understand where you’re coming from with your feedback, it would be helpful to state how you personally approach this issue:
Are you an LLVM contributor (individual or representing a company)?
Yes,  representing Google. 
Are you involved with security aspects of LLVM (if so, which)?

To some extent:
* my team owns tools that tend to find security bugs (sanitizers, libFuzzer)
* my team co-owns oss-fuzz, which automatically sends security bugs to LLVM 
 
Do you maintain significant downstream LLVM changes?

no
 
Do you package and deploy LLVM for others to use (if so, to how many people)?

not my team
 
Is your LLVM distribution based on the open-source releases?

no
 
How often do you usually deploy LLVM?

In some ecosystems LLVM is deployed ~ every two-three weeks. 
In others it takes months. 
 
How fast can you deploy an update?

For some ecosystems we can turn around in several days. 
For others I don't know.  
 
Does your LLVM distribution handle untrusted inputs, and what kind?

Third party OSS code that is often pulled automatically. 
 
What’s the threat model for your LLVM distribution?

Speculating here. I am not a real security expert myself
* A developer getting a bug report and running clang/llvm on the "buggy" input, compromising the developer's desktop. 
* A major opensource project is compromised and it's code is changed in a subtle way that triggers a vulnerability in Clang/LLVM.
  The opensource code is pulled into an internal repo and is compiled by clang, compromising a machine on the build farm. 
* A vulnerability in a run-time library, e.g. crbug.com/606626 or crbug.com/994957
* (???) Vulnerability in a LLVM-based JIT triggered by untrusted bitcode. <2-nd hand knowledge>
* (???) an optimizer introducing a vulnerability into otherwise memory-safe code (we've seen a couple of such in load & store widening)
* (???) deficiency in a hardening pass (CFI, stack protector, shadow call stack) making the hardening inefficient.   

My 2c on the policies: if we actually treat some area of LLVM security-critical, 
we must not only ensure that a reported bug is fixed, but also that the affected component gets
additional testing, fuzzing, and hardening afterwards. 
E.g. for crbug.com/994957 I'd really like to see a fuzz target as a form of regression testing.

Thanks, this is great stuff!


--kcc 
 

On Sat, Nov 16, 2019 at 8:23 AM JF Bastien via llvm-dev <[hidden email]> wrote:

Hello compiler enthusiasts,


The Apple LLVM team would like to propose that a new a security process and an associated private LLVM Security Group be created under the umbrella of the LLVM project.


A draft proposal for how we could organize such a group and what its process could be is 
available on Phabricator. The proposal starts with a list of goals for the process and Security Group, repeated here:

The LLVM Security Group has the following goals:
  1. Allow LLVM contributors and security researchers to disclose security-related issues affecting the LLVM project to members of the LLVM community.
  2. Organize fixes, code reviews, and release management for said issues.
  3. Allow distributors time to investigate and deploy fixes before wide dissemination of vulnerabilities or mitigation shortcomings.
  4. Ensure timely notification and release to vendors who package and distribute LLVM-based toolchains and projects.
  5. Ensure timely notification to users of LLVM-based toolchains whose compiled code is security-sensitive, through the CVE process.

We’re looking for answers to the following questions:
  1. On this list: Should we create a security group and process?
  2. On this list: Do you agree with the goals listed in the proposal?
  3. On this list: at a high-level, what do you think should be done differently, and what do you think is exactly right in the draft proposal?
  4. On the Phabricator code review: going into specific details, what do you think should be done differently, and what do you think is exactly right in the draft proposal?
  5. On this list: to help understand where you’re coming from with your feedback, it would be helpful to state how you personally approach this issue:
    1. Are you an LLVM contributor (individual or representing a company)?
    2. Are you involved with security aspects of LLVM (if so, which)?
    3. Do you maintain significant downstream LLVM changes?
    4. Do you package and deploy LLVM for others to use (if so, to how many people)?
    5. Is your LLVM distribution based on the open-source releases?
    6. How often do you usually deploy LLVM?
    7. How fast can you deploy an update?
    8. Does your LLVM distribution handle untrusted inputs, and what kind?
    9. What’s the threat model for your LLVM distribution?

Other open-source projects have security-related groups and processes. They structure their group very differently from one another. This proposal borrows from some of these projects’ processes. A few examples:
When providing feedback, it would be great to hear if you’ve dealt with these or other projects’ processes, what works well, and what can be done better.


I’ll go first in answering my own questions above:
  1. Yes! We should create a security group and process.
  2. We agree with the goals listed.
  3. We think the proposal is exactly right, but would like to hear the community’s opinions.
  4. Here’s how we approach the security of LLVM:
    1. I contribute to LLVM as an Apple employee.
    2. I’ve been involved in a variety of LLVM security issues, from automatic variable initialization to security-related diagnostics, as well as deploying these mitigations to internal codebases.
    3. We maintain significant downstream changes.
    4. We package and deploy LLVM, both internally and externally, for a variety of purposes, including the clang, Swift, and mobile GPU shader compilers.
    5. Our LLVM distribution is not directly derived from the open-source release. In all cases, all non-upstream public patches for our releases are available in repository branches at https://github.com/apple.
    6. We have many deployments of LLVM whose release schedules vary significantly. The LLVM build deployed as part of Xcode historically has one major release per year, followed by roughly one minor release every 2 months. Other releases of LLVM are also security-sensitive and don’t follow the same schedule.
    7. This depends on which release of LLVM is affected.
    8. Yes, our distribution sometimes handles untrusted input.
    9. The threat model is highly variable depending on the particular language front-ends being considered.
Apple is involved with a variety of open-source projects and their disclosures. For example, we frequently work with the WebKit community to handle security issues through their process.


Thanks,

JF


_______________________________________________
LLVM Developers mailing list
[hidden email]
https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev



Re: [llvm-dev] [RFC] LLVM Security Group and Process

Jonas Paulsson via llvm-dev
In reply to this post by Jonas Paulsson via llvm-dev
Dimitry had a pretty comprehensive reply for FreeBSD, but I want to
expand on one thing:

On Thu, 5 Dec 2019 at 13:45, Dimitry Andric <[hidden email]> wrote:
>
> On this list: Do you agree with the goals listed in the proposal?
>
> Yes, but I hope we can clarify what "time to investigate" and "timely notification" means, in more precise terms.

Other replies in the thread touched on this, but I want to again
highlight that we should make sure we are clear about what is and is
not in scope for the team. Perhaps we should explicitly position this as an
"LLVM SIRT" or similar, rather than a "security team", to indicate that
the focus is vulnerability response. Issues or discussions that are
security-related but do not need to be handled in confidence don't
require this process, but folks may send such issues to a "security
team" (as happens on occasion with the FreeBSD security team).
_______________________________________________
LLVM Developers mailing list
[hidden email]
https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev

Re: [llvm-dev] [RFC] LLVM Security Group and Process

Jonas Paulsson via llvm-dev
In reply to this post by Jonas Paulsson via llvm-dev
Hi folks!

I want to ping this discussion again, now that the holidays are over. I’ve updated the patch to address the comments I’ve received.

Overall it seems the feedback is positive, with some worries about parts that aren’t defined yet. I’m trying to get things started, so not everything needs to be defined yet! I’m glad folks have ideas of *how* we should define what’s still open.


Thanks,

JF


On Nov 15, 2019, at 10:58 AM, JF Bastien via llvm-dev <[hidden email]> wrote:

Hello compiler enthusiasts,


The Apple LLVM team would like to propose that a new security process and an associated private LLVM Security Group be created under the umbrella of the LLVM project.


A draft proposal for how we could organize such a group and what its process could be is 
available on Phabricator. The proposal starts with a list of goals for the process and Security Group, repeated here:

The LLVM Security Group has the following goals:
  1. Allow LLVM contributors and security researchers to disclose security-related issues affecting the LLVM project to members of the LLVM community.
  2. Organize fixes, code reviews, and release management for said issues.
  3. Allow distributors time to investigate and deploy fixes before wide dissemination of vulnerabilities or mitigation shortcomings.
  4. Ensure timely notification and release to vendors who package and distribute LLVM-based toolchains and projects.
  5. Ensure timely notification to users of LLVM-based toolchains whose compiled code is security-sensitive, through the CVE process.

We’re looking for answers to the following questions:
  1. On this list: Should we create a security group and process?
  2. On this list: Do you agree with the goals listed in the proposal?
  3. On this list: at a high-level, what do you think should be done differently, and what do you think is exactly right in the draft proposal?
  4. On the Phabricator code review: going into specific details, what do you think should be done differently, and what do you think is exactly right in the draft proposal?
  5. On this list: to help understand where you’re coming from with your feedback, it would be helpful to state how you personally approach this issue:
    1. Are you an LLVM contributor (individual or representing a company)?
    2. Are you involved with security aspects of LLVM (if so, which)?
    3. Do you maintain significant downstream LLVM changes?
    4. Do you package and deploy LLVM for others to use (if so, to how many people)?
    5. Is your LLVM distribution based on the open-source releases?
    6. How often do you usually deploy LLVM?
    7. How fast can you deploy an update?
    8. Does your LLVM distribution handle untrusted inputs, and what kind?
    9. What’s the threat model for your LLVM distribution?

Other open-source projects have security-related groups and processes. They structure their groups very differently from one another. This proposal borrows from some of these projects’ processes. A few examples:
When providing feedback, it would be great to hear if you’ve dealt with these or other projects’ processes, what works well, and what can be done better.


I’ll go first in answering my own questions above:
  1. Yes! We should create a security group and process.
  2. We agree with the goals listed.
  3. We think the proposal is exactly right, but would like to hear the community’s opinions.
  4. Here’s how we approach the security of LLVM:
    1. I contribute to LLVM as an Apple employee.
    2. I’ve been involved in a variety of LLVM security issues, from automatic variable initialization to security-related diagnostics, as well as deploying these mitigations to internal codebases.
    3. We maintain significant downstream changes.
    4. We package and deploy LLVM, both internally and externally, for a variety of purposes, including the clang, Swift, and mobile GPU shader compilers.
    5. Our LLVM distribution is not directly derived from the open-source release. In all cases, all non-upstream public patches for our releases are available in repository branches at https://github.com/apple.
    6. We have many deployments of LLVM whose release schedules vary significantly. The LLVM build deployed as part of Xcode historically has one major release per year, followed by roughly one minor release every 2 months. Other releases of LLVM are also security-sensitive and don’t follow the same schedule.
    7. This depends on which release of LLVM is affected.
    8. Yes, our distribution sometimes handles untrusted input.
    9. The threat model is highly variable depending on the particular language front-ends being considered.
Apple is involved with a variety of open-source projects and their disclosures. For example, we frequently work with the WebKit community to handle security issues through their process.


Thanks,

JF



Re: [llvm-dev] [RFC] LLVM Security Group and Process

Jonas Paulsson via llvm-dev
Hi JF,

 Answering your questions both as an individual and with a red hat on:
 
> Should we create a security group and process?

Yes! That's a good start, and some bits of formalization are likely to be beneficial.

> Do you agree with the goals listed in the proposal?

Yes.

> At a high-level, what do you think should be done differently, and what do you think is exactly right in the draft proposal?

I like the non-intrusive coordination aspect. It also helps to have a group to speak with for responsible disclosure.

The dispatch mechanism to actual developers is unclear. Do they need to be part of the group? How are they contacted, and based on which criteria?

 
> Our approach to this issue:

> 1. Are you an LLVM contributor (individual or representing a company)?

yes and yes (Red Hat)


> 2. Are you involved with security aspects of LLVM (if so, which)?

In the past: yes, building an obfuscating compiler based on LLVM.
In my current role: yes, trying to implement / catch up with some of the gcc hardening features clang doesn't have (e.g. -fstack-clash-protection and, recently, _FORTIFY_SOURCE improvements).

 
> 3. Do you maintain significant downstream LLVM changes?

We're trying to have as few patches as possible, so that's a small yes.

 
> 4. Do you package and deploy LLVM for others to use (if so, to how many people)?

Yes (Fedora and RHEL)

> 5. Is your LLVM distribution based on the open-source releases?

Yes, with a longer delay for RHEL.

 
> 6. How often do you usually deploy LLVM?

At least once for each major and minor update (Fedora), then backports + RHEL.

> 7. How fast can you deploy an update?

For Fedora, it can be a matter of days. For RHEL it takes longer, but it can be ~1 week.
 

> 8. Does your LLVM distribution handle untrusted inputs, and what kind?
> 9. What’s the threat model for your LLVM distribution?

I don't think we have something specific to LLVM in the threat model, especially as gcc is the system compiler for both distributions.

--
Serge

On Wed, Jan 8, 2020 at 6:36 AM JF Bastien via llvm-dev <[hidden email]> wrote:
Hi folks!

I want to ping this discussion again, now that the holidays are over. I’ve updated the patch to address the comments I’ve received.

Overall it seems the feedback is positive, with some worries about parts that aren’t defined yet. I’m trying to get things started, so not everything needs to be defined yet! I’m glad folks have ideas of *how* we should define what’s still open.


Thanks,

JF


On Nov 15, 2019, at 10:58 AM, JF Bastien via llvm-dev <[hidden email]> wrote:

Hello compiler enthusiasts,


The Apple LLVM team would like to propose that a new security process and an associated private LLVM Security Group be created under the umbrella of the LLVM project.


A draft proposal for how we could organize such a group and what its process could be is 
available on Phabricator. The proposal starts with a list of goals for the process and Security Group, repeated here:

The LLVM Security Group has the following goals:
  1. Allow LLVM contributors and security researchers to disclose security-related issues affecting the LLVM project to members of the LLVM community.
  2. Organize fixes, code reviews, and release management for said issues.
  3. Allow distributors time to investigate and deploy fixes before wide dissemination of vulnerabilities or mitigation shortcomings.
  4. Ensure timely notification and release to vendors who package and distribute LLVM-based toolchains and projects.
  5. Ensure timely notification to users of LLVM-based toolchains whose compiled code is security-sensitive, through the CVE process.

We’re looking for answers to the following questions:
  1. On this list: Should we create a security group and process?
  2. On this list: Do you agree with the goals listed in the proposal?
  3. On this list: at a high-level, what do you think should be done differently, and what do you think is exactly right in the draft proposal?
  4. On the Phabricator code review: going into specific details, what do you think should be done differently, and what do you think is exactly right in the draft proposal?
  5. On this list: to help understand where you’re coming from with your feedback, it would be helpful to state how you personally approach this issue:
    1. Are you an LLVM contributor (individual or representing a company)?
    2. Are you involved with security aspects of LLVM (if so, which)?
    3. Do you maintain significant downstream LLVM changes?
    4. Do you package and deploy LLVM for others to use (if so, to how many people)?
    5. Is your LLVM distribution based on the open-source releases?
    6. How often do you usually deploy LLVM?
    7. How fast can you deploy an update?
    8. Does your LLVM distribution handle untrusted inputs, and what kind?
    9. What’s the threat model for your LLVM distribution?

Other open-source projects have security-related groups and processes. They structure their groups very differently from one another. This proposal borrows from some of these projects’ processes. A few examples:
When providing feedback, it would be great to hear if you’ve dealt with these or other projects’ processes, what works well, and what can be done better.


I’ll go first in answering my own questions above:
  1. Yes! We should create a security group and process.
  2. We agree with the goals listed.
  3. We think the proposal is exactly right, but would like to hear the community’s opinions.
  4. Here’s how we approach the security of LLVM:
    1. I contribute to LLVM as an Apple employee.
    2. I’ve been involved in a variety of LLVM security issues, from automatic variable initialization to security-related diagnostics, as well as deploying these mitigations to internal codebases.
    3. We maintain significant downstream changes.
    4. We package and deploy LLVM, both internally and externally, for a variety of purposes, including the clang, Swift, and mobile GPU shader compilers.
    5. Our LLVM distribution is not directly derived from the open-source release. In all cases, all non-upstream public patches for our releases are available in repository branches at https://github.com/apple.
    6. We have many deployments of LLVM whose release schedules vary significantly. The LLVM build deployed as part of Xcode historically has one major release per year, followed by roughly one minor release every 2 months. Other releases of LLVM are also security-sensitive and don’t follow the same schedule.
    7. This depends on which release of LLVM is affected.
    8. Yes, our distribution sometimes handles untrusted input.
    9. The threat model is highly variable depending on the particular language front-ends being considered.
Apple is involved with a variety of open-source projects and their disclosures. For example, we frequently work with the WebKit community to handle security issues through their process.


Thanks,

JF



Re: [llvm-dev] [RFC] LLVM Security Group and Process

Jonas Paulsson via llvm-dev
On behalf of the board, I'd like to acknowledge that given the growing usage of LLVM in wildly different areas, having some structure or process to address security aspects is important, if not critical, for the health and success of the LLVM project as a whole.

The board will fully support this group, but will not "run" it, as this does not fall in the Foundation's remits.

We believe this is mostly an entity thing (companies, distributions, ...), and these are notoriously slow to react: the group has to interact with their own security groups and internal processes (SDL, ...), and the people usually active on the mailing list are not necessarily the ones interested in this topic.

Each security advisory is very specific (Spectre is quite different from stack protection), and the spectrum of LLVM projects keeps growing over time (f18, mlir, libc, ...). This makes us think that the people in the group should be well-identified, security-aware, knowledgeable, trusted contact points within their entities, used to coordinating amongst entities, rather than deep technical experts (the former is mandatory, the latter is nice to have). Technical experts on the specific advisory under work will need to be brought in by the security group on an as-needed basis. The board believes the real benefit of this group is coordinating the investigation and deployment of security fixes amongst the different community entity-members.

Finally, we believe it's best to begin with a small and motivated group, laying the foundations, and then extend it as needed.

On behalf of the board, I'd like to invite those who think their entity should care about this proposal to prod the relevant person(s) in their entity to comment on this proposal, preferably on the mailing list or phabricator, but worst case directly to JF or myself.

Once we have some more comments / feedback, we can think of committing this policy, and forming an initial group.

Kind regards,
Arnaud


From: Serge Guelton via llvm-dev <[hidden email]>
Date: Thu, Jan 9, 2020 at 4:55 PM
Subject: Re: [llvm-dev] [RFC] LLVM Security Group and Process
To: JF Bastien <[hidden email]>
Cc: llvm-dev <[hidden email]>


Hi JF,

 Answering your questions both as an individual and with a red hat on:
 
> Should we create a security group and process?

Yes! That's a good start, and some bits of formalization are likely to be beneficial.

> Do you agree with the goals listed in the proposal?

Yes.

> At a high-level, what do you think should be done differently, and what do you think is exactly right in the draft proposal?

I like the non-intrusive coordination aspect. It also helps to have a group to speak with for responsible disclosure.

The dispatch mechanism to actual developers is unclear. Do they need to be part of the group? How are they contacted, and based on which criteria?

 
> Our approach to this issue:

> 1. Are you an LLVM contributor (individual or representing a company)?

yes and yes (Red Hat)


> 2. Are you involved with security aspects of LLVM (if so, which)?

In the past: yes, building an obfuscating compiler based on LLVM.
In my current role: yes, trying to implement / catch up with some of the gcc hardening features clang doesn't have (e.g. -fstack-clash-protection and, recently, _FORTIFY_SOURCE improvements).

 
> 3. Do you maintain significant downstream LLVM changes?

We're trying to have as few patches as possible, so that's a small yes.

 
> 4. Do you package and deploy LLVM for others to use (if so, to how many people)?

Yes (Fedora and RHEL)

> 5. Is your LLVM distribution based on the open-source releases?

Yes, with a longer delay for RHEL.

 
> 6. How often do you usually deploy LLVM?

At least once for each major and minor update (Fedora), then backports + RHEL.

> 7. How fast can you deploy an update?

For Fedora, it can be a matter of days. For RHEL it takes longer, but it can be ~1 week.
 

> 8. Does your LLVM distribution handle untrusted inputs, and what kind?
> 9. What’s the threat model for your LLVM distribution?

I don't think we have something specific to LLVM in the threat model, especially as gcc is the system compiler for both distributions.

--
Serge

On Wed, Jan 8, 2020 at 6:36 AM JF Bastien via llvm-dev <[hidden email]> wrote:
Hi folks!

I want to ping this discussion again, now that the holidays are over. I’ve updated the patch to address the comments I’ve received.

Overall it seems the feedback is positive, with some worries about parts that aren’t defined yet. I’m trying to get things started, so not everything needs to be defined yet! I’m glad folks have ideas of *how* we should define what’s still open.


Thanks,

JF


On Nov 15, 2019, at 10:58 AM, JF Bastien via llvm-dev <[hidden email]> wrote:

Hello compiler enthusiasts,


The Apple LLVM team would like to propose that a new security process and an associated private LLVM Security Group be created under the umbrella of the LLVM project.


A draft proposal for how we could organize such a group and what its process could be is 
available on Phabricator. The proposal starts with a list of goals for the process and Security Group, repeated here:

The LLVM Security Group has the following goals:
  1. Allow LLVM contributors and security researchers to disclose security-related issues affecting the LLVM project to members of the LLVM community.
  2. Organize fixes, code reviews, and release management for said issues.
  3. Allow distributors time to investigate and deploy fixes before wide dissemination of vulnerabilities or mitigation shortcomings.
  4. Ensure timely notification and release to vendors who package and distribute LLVM-based toolchains and projects.
  5. Ensure timely notification to users of LLVM-based toolchains whose compiled code is security-sensitive, through the CVE process.

We’re looking for answers to the following questions:
  1. On this list: Should we create a security group and process?
  2. On this list: Do you agree with the goals listed in the proposal?
  3. On this list: at a high-level, what do you think should be done differently, and what do you think is exactly right in the draft proposal?
  4. On the Phabricator code review: going into specific details, what do you think should be done differently, and what do you think is exactly right in the draft proposal?
  5. On this list: to help understand where you’re coming from with your feedback, it would be helpful to state how you personally approach this issue:
    1. Are you an LLVM contributor (individual or representing a company)?
    2. Are you involved with security aspects of LLVM (if so, which)?
    3. Do you maintain significant downstream LLVM changes?
    4. Do you package and deploy LLVM for others to use (if so, to how many people)?
    5. Is your LLVM distribution based on the open-source releases?
    6. How often do you usually deploy LLVM?
    7. How fast can you deploy an update?
    8. Does your LLVM distribution handle untrusted inputs, and what kind?
    9. What’s the threat model for your LLVM distribution?

Other open-source projects have security-related groups and processes. They structure their groups very differently from one another. This proposal borrows from some of these projects’ processes. A few examples:
When providing feedback, it would be great to hear if you’ve dealt with these or other projects’ processes, what works well, and what can be done better.


I’ll go first in answering my own questions above:
  1. Yes! We should create a security group and process.
  2. We agree with the goals listed.
  3. We think the proposal is exactly right, but would like to hear the community’s opinions.
  4. Here’s how we approach the security of LLVM:
    1. I contribute to LLVM as an Apple employee.
    2. I’ve been involved in a variety of LLVM security issues, from automatic variable initialization to security-related diagnostics, as well as deploying these mitigations to internal codebases.
    3. We maintain significant downstream changes.
    4. We package and deploy LLVM, both internally and externally, for a variety of purposes, including the clang, Swift, and mobile GPU shader compilers.
    5. Our LLVM distribution is not directly derived from the open-source release. In all cases, all non-upstream public patches for our releases are available in repository branches at https://github.com/apple.
    6. We have many deployments of LLVM whose release schedules vary significantly. The LLVM build deployed as part of Xcode historically has one major release per year, followed by roughly one minor release every 2 months. Other releases of LLVM are also security-sensitive and don’t follow the same schedule.
    7. This depends on which release of LLVM is affected.
    8. Yes, our distribution sometimes handles untrusted input.
    9. The threat model is highly variable depending on the particular language front-ends being considered.
Apple is involved with a variety of open-source projects and their disclosures. For example, we frequently work with the WebKit community to handle security issues through their process.


Thanks,

JF

