Mozilla Security Metrics Project

Mozilla has been working with security researcher and analyst Rich Mogull for a few months now on a project to develop a metrics model to measure the relative security of Firefox over time. We are trying to develop a model that goes beyond simple bug counts and more accurately reflects both the effectiveness of secure development efforts and the relative risk to users over time. Our goal in this first phase of the project is to build a baseline model we can evolve as we learn what works and what does not. We do not think any model can define an absolute level of security, so we decided to track metrics over time, which lets us measure relative improvements (or declines) and identify problem spots. This information will support the development of Mozilla projects, including future versions of Firefox.

Below is a summary of the project goals. The model is posted as an .xls spreadsheet at http://securosis.com/publications/MozillaProject2.xls, and the same content is available as a set of .csv files here: http://securosis.com/publications/MozillaProject.zip [Update] There is also a copy for OpenOffice: http://securosis.com/publications/MozillaProject2.ods

This is a preliminary version and we are currently looking for feedback. The final version will be a far more descriptive document, but for now we are using a spreadsheet to refine the approach. Feel free to download it, rip it apart, and post your comments. This is an open project and process.  Eventually we will release this to the community at large with the hope that other organizations can adapt it to their own needs.

We would love to get your opinions on this, and if you are not comfortable commenting here you can mail Rich directly at rmogull@securosis.com.  When we have reviewed the feedback, we will post here with findings and continue the effort with your help.

Project Mission:
To develop a metrics-based model to track the relative security of Firefox, evaluate the effectiveness of security efforts within the development and testing process, and measure the window of exposure of Firefox users to security vulnerabilities.

Secondary mission:
To develop an open base model that can be standardized and expanded upon for other software development efforts to achieve the same goals.

Detailed goals:
1. Track security trends in the development of Firefox.
2. Measure the effectiveness of various tools, stages and techniques of secure development.
3. Measure the exposure window when new vulnerabilities are discovered: the time to get x% of the user base protected. This will include sub-metrics to measure the efficiency of the process, from initial response through patch generation to user-base update, correlated by severity of vulnerability. (A sketch of one way to compute this follows below.)
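
As an illustration, here is a minimal sketch of how the exposure-window sub-metrics for a single vulnerability could be computed. The function, its input fields, and the 90% threshold are placeholder assumptions for the example, not finalized parts of the model.

```python
from datetime import date

def exposure_window(disclosed: date, patch_released: date,
                    adoption: list[tuple[date, float]],
                    threshold: float = 0.9) -> dict:
    """Sub-metrics of the exposure window for one vulnerability.

    adoption: (date, fraction of user base patched) samples, ascending by date.
    threshold: fraction of users that must be patched to close the window.
    """
    # First sampled date at which the patched fraction reaches the threshold.
    protected = next((d for d, f in adoption if f >= threshold), None)
    return {
        "days_to_patch": (patch_released - disclosed).days,
        "days_to_threshold": (protected - disclosed).days if protected else None,
    }

# Hypothetical example: disclosed Jan 1, patch shipped Jan 8,
# and 90% of the user base updated by Jan 20.
print(exposure_window(
    date(2008, 1, 1), date(2008, 1, 8),
    adoption=[(date(2008, 1, 10), 0.40), (date(2008, 1, 15), 0.75),
              (date(2008, 1, 20), 0.92)],
))  # {'days_to_patch': 7, 'days_to_threshold': 19}
```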

15 comments on “Mozilla Security Metrics Project”

  1. Arthur wrote on

    This looks like a great starting point. I’m really looking forward to seeing some of these pages filled in and to the data sets growing large enough for serious analysis, especially the development metrics. It’s high time we started getting public discussion of how bad (or good) a major piece of software is.

  2. Richard Lloyd wrote on

    I’m a little surprised that an Open Source organisation like Mozilla provides the project goals for their Security Metrics as a spreadsheet in a proprietary format (.xls) rather than something like Open Document Format (.ods). What’s worse is that if you try to use the main Open Source office suite out there (OpenOffice.org) to open the .xls file, the formatting is all over the place 🙁

  3. Ariel wrote on

    My two cents:
    Great! This information will be very helpful. There are a few aspects of the security problem that are worth considering. First, there is the problem of assessing the impact of (multiple) concurrent vulnerabilities, e.g., attacks that take advantage of two vulnerabilities that haven’t been patched at a given moment. In fact, it would also be interesting to consider the combination of a vulnerability in the browser with another in the OS, mail client, plugin, etc. After all, the browser is one of the most popular attack vectors today! In this sense, what I am expecting from this is an understanding of the impact of a vulnerability. I guess you could add a column with options such as: compromise the OS with the user’s privileges, hijack passwords and credentials, etc.

    Second, I think it would be beneficial to include similar information about other browsers. Of course, there are some facts that cannot be determined without source code and full details of the vulnerabilities, but even so, it would help to determine the usefulness of the metric (e.g., we’d expect our beliefs to be matched by the statistical results).

    Although it is a bit of a long shot, I’d suggest using statistical information to get rid of “noise.” Say a new exploit is discovered in the wild, or a new vulnerability is disclosed to Firefox’s team, but you don’t know how long the vulnerability has been known. If there are some vulnerabilities of the same type for which you do have more accurate information (i.e., the dates when they were first discovered), then you could use that information to infer when the former vulnerability was discovered. More importantly, I don’t think it is wise to classify bugs simply as 0-day or “disclosed before patched,” since this only partially addresses the problem. What you really want to know is whether the bug was being exploited before the patch was released (and this is difficult to know, right?)

    Cheers,
    Ariel

  4. Karthik Kannan wrote on

    The issue with the Total Exposure Window (TEW) is that it depends on a user-base parameter (90% of the user base must be patched).

    Two points on the user base:
    1. It cannot be based on the number of downloads (the same copy can be downloaded more than once) or on web site visit statistics. If used, the count should come from some more reliable mechanism, such as registration or an automatically generated key.
    2. Patching 90% of the user base may not be feasible, since there is an element of manual intervention: the user has to acknowledge the patch check/download/install.

  5. Ben Bucksch wrote on

    (Agreed with the format complaint. Please put this on a webpage.)

    Start day could be:
    1. Day when the bug was introduced (probably ship date of first vulnerable version).
    Rationale: You never know who found the bug and didn’t tell. NSA, computer mafia, whatever.
    2. Day when the first known human became aware of the bug.
    Rationale: This is when we know it could have been exploited. Even if the finder is well-intentioned, there can always be leaks, of a technical or human nature, to black hats and governments.
    3. Day when the bug was reported to the vendor.
    (Personally, I think this is irrelevant, but will be favored by vendors.)
    4. Day when the bug was published to the public.
    (If so-called “Responsible Disclosure” is followed, this would mean the “Window of Exposure” is always 0 days, which I think does not properly reflect the danger. There are some downstream vendors that are pathetically slow, though, and they manage to wait *months* after the public availability of official patches before they ship binary fixes.)

    I would take them weighted: 1. with 10%, 2. with 100% and 4. with 1000%. (A sketch of one way to compute such a weighted window follows the end-date list below.)

    End date could be:
    1. Date when patch in source form was available.
    2. Date when patch in binary form was available to public.
    3. Date when patch in easily installable form was available to public.
    4. Date when 90% (or x%) of users have been patched/updated.

    I would take 3. and/or 4.
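
    A minimal sketch of one possible reading of this scheme, treating the percentages as relative weights in a weighted average of the candidate windows and using end date 4.; the interpretation and all dates here are my assumptions:

    ```python
    from datetime import date

    # Suggested weights for the start dates: 1. at 10%, 2. at 100%, 4. at 1000%.
    WEIGHTS = {"introduced": 0.1, "first_known_aware": 1.0, "public_disclosure": 10.0}

    def weighted_exposure_days(starts: dict[str, date], end: date) -> float:
        """Weighted average of exposure windows measured from each start date.

        'end' is the date when x% of users have been updated (end date 4.).
        """
        total = sum(WEIGHTS.values())
        return sum(w * (end - starts[k]).days for k, w in WEIGHTS.items()) / total

    print(weighted_exposure_days(
        {"introduced": date(2007, 6, 1),         # ship of first vulnerable version
         "first_known_aware": date(2008, 1, 5),  # first known discovery
         "public_disclosure": date(2008, 2, 1)}, # public disclosure
        end=date(2008, 2, 15),                   # 90% of users updated
    ))  # about 18.6 weighted days of exposure
    ```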

    Severity could be (it’s hard to make categories for this, as it’s not even linear):
    1. Compromise of the user account or system. Can read/write all the user’s data. Same as executing binaries and being able to do anything the user himself can do. Can steal diaries, love letters/emails/SMS, private photos, all stored passwords, etc.
    2. Takeover of application, but other apps and data not affected. E.g. Cross-domain exploits which don’t affect local files.
    3. Major loss of privacy, e.g. browsing history / all URLs visited.
    4. Minor loss of privacy, e.g. can detect that I have been on a certain site.

    I’d make separate indices for each of them, as they have vastly different implications.
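
    And a tiny sketch of what separate per-severity indices might look like (the class names and numbers are illustrative only):

    ```python
    from enum import Enum
    from statistics import mean

    class Severity(Enum):
        SYSTEM_COMPROMISE = 1   # 1. full user account/system compromise
        APP_TAKEOVER = 2        # 2. app takeover, other apps/data unaffected
        MAJOR_PRIVACY_LOSS = 3  # 3. e.g., full browsing history exposed
        MINOR_PRIVACY_LOSS = 4  # 4. e.g., detecting a visit to one site

    def per_severity_index(windows: list[tuple[Severity, int]]) -> dict:
        """Average exposure window in days, kept separate per severity class."""
        index = {}
        for sev in Severity:
            days = [d for s, d in windows if s is sev]
            if days:
                index[sev.name] = mean(days)
        return index

    print(per_severity_index([
        (Severity.SYSTEM_COMPROMISE, 14), (Severity.SYSTEM_COMPROMISE, 30),
        (Severity.MINOR_PRIVACY_LOSS, 90),
    ]))  # {'SYSTEM_COMPROMISE': 22, 'MINOR_PRIVACY_LOSS': 90}
    ```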

  6. rmogull wrote on

    Richard,

    We also released the model as a set of csv files for people that don’t want to run Excel. The final version will be released in a bunch of different formats to meet the needs of different users.

  7. rmogull wrote on

    Ariel,

    We’re hoping that the “security type” metric will take care of your first suggestion. Examples we’ll include there are remote code execution, privilege escalation, credential compromise, etc. Is that what you’re looking for?

    As for comparing to other browsers, that’s not really the goal of the project. We’re more internally focused. I don’t think these kinds of models work well when applied externally; they need to be adopted by whoever makes the software.

    Finally, the exploit question is a tough one. We looked hard at that and couldn’t find any accurate metrics, so we had to rely more on vulnerabilities.

    Thanks for the feedback. Did this address your questions? Do you still think we need an additional category or two?

  8. rmogull wrote on

    Karthik,

    I agree we may need to modify the 90% number. That was just a rough placeholder; I think we’ll need to look at it closely before finalizing.

    In terms of how Mozilla measures updates, it’s tied to the auto update mechanism in the browser. The second half of this post provides a good overview of where the metric comes from:

    http://john.jubjubs.net/2007/11/27/mozilla-firefox-market-share/

    What do you think?

  9. blueget wrote on

    XLS? WTF? Why is the information published in a proprietary, closed Microsoft format, for which you need to buy costly, proprietary software that doesn’t even run on Linux?

    Shame on Mozilla for that!

  10. blueget wrote on

    And NO, CSV is *not* an alternative. It’s not about “people who don’t want to run Excel”. It’s about data accessibility and open formats.

    You also can’t claim you were unable to publish it as .ods, since there is a free ODF plugin for MS Office (see http://www.sun.com/software/star/odf_plugin/index.jsp).

  11. Window Snyder wrote on

    There is an OpenOffice version available now at: http://securosis.com/publications/MozillaProject2.ods

  12. Dimitri wrote on

    The XLS file seems like a very comprehensive list of measurements. At the same time, it is not clear to me how we are going to relate these to standard models of vulnerability analysis such as DREAD or STRIDE, which attach semantic meaning to vulnerability measurements. See:

    http://www.owasp.org/index.php/Threat_Risk_Modeling

    It’s also not clear to me that we aren’t trying to achieve very similar goals to those proposals. Did you guys consider adapting them?

  13. Ariel wrote on

    Thanks for the answer, Rich. Yes, the security type column would take care of my first suggestion provided a careful security analysis is made. And, of course, security metric information for other browsers is not required, but it would help to assess the metrics you devised.

    About the window-of-exposure metric, it is worth understanding what you expect to get from it. (I provide a suggestion below.)

    It is realistic to assume that some vulnerabilities will be, and have been, discovered outside the Firefox development team’s view. This is obvious when an exploit is found in the wild before Firefox’s team is alerted, but it can also happen without one being found. You could use the information on when a vulnerability was introduced into the code and when the development team learned about it (i.e., the difference between the two dates); once a fair number of these have been recorded, you might see a probability distribution emerge (e.g., a Gaussian bell curve). I suspect this distribution will tell you a lot about when these vulnerabilities are discovered, might allow you to classify vulnerabilities by how hard they are to spot, etc.

    An application of this would be: assume that vulnerabilities take 12 weeks on average to be discovered, with a standard deviation of 1 week, and assume that Firefox’s team discovered a vulnerability 8 weeks after introducing it. Then you know the patch should be developed fast (and taking 2 weeks to do so might be too much!)
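
    To make this concrete, here is a small sketch using the hypothetical figures above, assuming outside discovery times follow a normal distribution (the numbers are illustrative only):

    ```python
    from statistics import NormalDist

    # Hypothetical: outside discovery times ~ N(mean=12 weeks, std dev=1 week).
    discovery = NormalDist(mu=12, sigma=1)

    internal_find_week = 8  # the team found the bug 8 weeks after introduction

    # Probability that an outside discovery would already have happened by now.
    p_already_found = discovery.cdf(internal_find_week)

    # Weeks until the typical (mean) outside discovery would occur.
    weeks_left = discovery.mean - internal_find_week

    print(f"P(outside discovery by week {internal_find_week}) = {p_already_found:.4%}")
    print(f"About {weeks_left:.0f} weeks until the typical outside discovery;")
    print("spending 2 of them on the patch leaves little safety margin.")
    ```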

    Cheers.

  14. jim carter wrote on

    Change the font size from 10 to 8 and readability improves in the columns.

  15. George wrote on

    Dear Friends,
    Please help me. I have Mozilla Firefox 3.0.6.
    After a recent update I keep getting this nagging message:

    “Password Bank has found that Firefox was uninstalled from this computer. Please confirm following elevation request to remove orphan Password Bank support”.

    After dismissing this message, the next step is to install a strange program:
    “Browser support installation for Password Bank, UPEK Inc.”

    Please tell me, what is this nagging program that is ready for installation?

    George