
[replying to both Ping and Michael in the same email]

On 7/6/06, Michael Chermside <mcherm@mcherm.com> wrote:
Ka-Ping Yee writes:
> i'm starting to think
> that it would be good to clarify what kinds of threats we are
> trying to defend against, and specify what invariants we are
> intending to preserve.

Yes!

> So here are a couple of questions for clarification (some with my
> guesses as to their answers):


Okay, I'll throw in my thoughts also.

> 1.  When we say "restricted/untrusted/<whatever> interpreter" we
>     don't really mean that the *interpreter* is untrusted, right?
>     We mean that the Python code that runs in that interpreter is
>     untrusted (i.e. to be prevented from doing harm), right?

Agreed. My interpretation of the proposal was that interpreters
were either "sandboxed" or "trusted". "Sandboxed" means that there
are security restrictions imposed at some level (perhaps even NO
restrictions). "Trusted" means that the interpreter implements
no security restrictions (beyond what CPython already implements,
which isn't much) and thus runs faster.


Yep.

> 2.  I'm assuming that the implementation of the Python interpreter
>     is always trusted

Sure... it's got to be.

Yep.

> What do
> we take the Trusted Computing Base to include?  The Python VM
> implementation -- plus all the builtin objects and C modules?
> Plus the whole standard library?

My interpretation of Brett's proposal is that the CPython developers
would try to ensure that Python VM had no "security holes" when
running in sandboxed mode. Of course, we also "try" to ensure that
no crashes are possible, and while we're quite good, we're not
perfect.

Beyond that, all pure-python modules with source available (whether
in the stdlib or not) can be "trusted" because they run in a
sandboxed VM. All C modules are *up to the user*. Brett proposes
to provide a default list of useful-but-believed-to-be-safe modules
in the stdlib, but the user can configure the C-module whitelist
to whatever she desires.

Michael has it on the money.
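To make the whitelist idea concrete, here is a minimal sketch of one way an import guard could behave. The hook style, names, and default list are my assumptions for illustration, not the actual mechanism in Brett's design doc:

```python
# Hypothetical sketch of a C-module whitelist: ALLOWED_MODULES and the
# __import__ override are illustrative assumptions, not the real design.
import builtins

ALLOWED_MODULES = {"math", "json"}  # assumed user-configurable whitelist

_real_import = builtins.__import__

def whitelisted_import(name, *args, **kwargs):
    # Only permit imports whose top-level package is on the whitelist.
    if name.split(".")[0] not in ALLOWED_MODULES:
        raise ImportError(f"module {name!r} is not on the whitelist")
    return _real_import(name, *args, **kwargs)

builtins.__import__ = whitelisted_import
try:
    import math                    # on the whitelist: succeeds
    ok = math.sqrt(4) == 2.0
    try:
        import socket              # not on the whitelist
        blocked = False
    except ImportError:
        blocked = True
finally:
    builtins.__import__ = _real_import   # restore the real import
```

Of course, the real proposal would enforce this below the Python level so sandboxed code could not simply restore `builtins.__import__` itself; this sketch only shows the whitelist-lookup logic.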

> 3.  Is it part of the plan that we want to protect Python code from
>     other Python code?  For example, should a Python program/function
>     X be able to say "i want to launch/call program/function Y with
>     *these* parameters and have it run under *these* limitations?"
>     This has a big impact on the model.

Now *that* is a good question. I would say the answer is a partial
"no", because there are pieces of Brett's security model that are
tied to the interpreter instance. Python code cannot launch another
interpreter (but perhaps it *should* be able to?), so it cannot
modify those restrictions for new Python code it launches.

However, I would rather like to allow Python code to execute other
code with greater restrictions, although I would accept all kinds
of limitations and performance penalties to do so. I would be
satisfied if the caller could restrict certain things (like web
and file access) but not others (like memory limits or use of
stdout). I would be satisfied if the caller paid huge overhead costs
of launching a separate interpreter -- heck, even a separate
process. And if it is willing to launch a separate process, then
Brett's model works just fine: allow the calling code to start
a new (restricted) Python VM.

The plan is that there is no sandboxed eval() that runs unsafe code
from a trusted interpreter within its namespace.  I hope to provide
Python code access to running a sandboxed interpreter where you can
pass in a string to be executed, but the namespace for that sandboxed
interpreter will be fresh and will not carry over in any way from the
trusted interpreter.
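The separate-process fallback discussed above is something you can approximate today. A rough sketch, where `python -I` (isolated mode) stands in for whatever restricted VM the proposal would actually launch, and the string of untrusted code runs in a completely fresh namespace:

```python
# Sketch: run a string of code in a fresh, separate interpreter process.
# This is an approximation of the proposal, not its actual API.
import subprocess
import sys

untrusted = "print(sum(range(10)))"

# -I starts an isolated interpreter: it ignores environment variables
# and user site-packages, and its namespace shares nothing (no objects,
# no globals) with the calling process.
result = subprocess.run(
    [sys.executable, "-I", "-c", untrusted],
    capture_output=True, text=True, timeout=10,
)
output = result.stdout.strip()
```

The only channel back to the caller is serialized bytes on stdout/stderr, which is exactly why Python object references cannot leak across the boundary.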

> We want to be able to guarantee that...
>
>   A.  The interpreter will not crash no matter what Python code
>       it is given to execute.

Agreed. We already want to guarantee that, with the caveat that the
guarantee doesn't apply to a few special modules (like ctypes).

Right, which is why I have been trying to plug the various known
crashers that do not rely upon importing a specific extension module.

>  B.  Python programs running in different interpreters embedded
>       in the same process cannot communicate with each other.

I don't want to guarantee this; does someone else? It's
astonishingly hard... there are all kinds of clever "knock on the
walls" tricks. For instance, communicate by varying your CPU
utilization up and down in regular patterns.

I'd be satisfied if they could pass information (perhaps even
someday provide a library making it *easy* to do so), but could
not pass unforgeable items like Python object references, open file
descriptors, and so forth.

Or at least cannot communicate without explicit allowances to do so.

As for knocking on the walls, if you protect access to that kind of
information well, it shouldn't be a problem.
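Michael's data-versus-references distinction can be illustrated with any byte-oriented channel. In this toy example (not part of the proposal; `Handle` is a made-up stand-in for an unforgeable item), only flattened data survives the round trip:

```python
# Illustration: a live object reference does not survive a trip through
# a serialized channel such as JSON; only plain data does.
import json

class Handle:
    """Made-up stand-in for an unforgeable item (object ref, open fd)."""
    def __init__(self, value):
        self.value = value

h = Handle(42)

# The sender must explicitly flatten the object into plain data...
message = json.dumps({"value": h.value})

# ...and the receiver reconstructs *new* data, not the original object.
received = json.loads(message)
same_object = received is h     # the reference did not cross
got_value = received["value"]   # the data did
```

This is the sense in which two interpreters could be allowed to pass information freely while still being unable to pass object references to each other.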

>   C.  Python programs running in different interpreters embedded
>       in the same process cannot access each other's Python objects.

I strengthen that slightly to all "unforgeable" items, not just
object references.

I would change that to add the caveat that what is exposed as a C
extension module attribute will be shared.  That is an implementation
detail of multiple interpreters.

>   D.  A given piece of Python code cannot access or communicate
>       with certain Python objects in the same interpreter.
>
>   E.  A given piece of Python code can access only a limited set
>       of Python objects in the same interpreter.

Hmmm. I'm not sure.

Not quite sure what you are getting at here, Ping.  Are you asking
about running code within an interpreter (sandboxed or not) that is
restricted even further beyond what the interpreter has been given
by the security settings?

These emails have convinced me to add a "Threat Model" section for
the next draft of the design doc.

-Brett
