Continuing in the series of “common real world Manta questions”, another one we hear a lot, at least from newcomers, is how to run node with node_modules. If you’re not familiar with node, it’s basically the same problem as using ruby gems, perl modules, python eggs, etc. You have a particular VM, and a particularly built set of add-ons that go with it, and you want all of that available to run your program.
This post is going to walk you through writing a node program that uses several add-on modules (including native ones) to accomplish a real world task: an ad-hoc comparison of how Google’s Compact Language Detector compares to the “language” attribute that exists on tweets. Because tweets are obviously very small, I was actually just curious myself how well this worked, so I built a small node script that uses extra npm modules to test it out.
If you’re not familiar yet with Manta, Manta is an object store with a twist: you can run compute in-situ on objects stored there. While the compute environment comes preloaded with a ton of standard utilities and libraries, sometimes you need custom code that isn’t available, or is customized in some way. To accomplish this you leverage two Manta concepts: assets and init; often these two are used together, as I will show you here.
The gist is that you create an asset of your necessary code and upload it as an object to Manta. When you submit a compute job, you specify the path to that object as an asset, and Manta will automatically make it available to you in your compute environment on the filesystem, under /assets/$MANTA_USER/....
While you can then just unpack it as part of your exec line, this is actually fairly heavyweight, as exec gets run on every input object (recall that if it can, Manta will optimize by not evicting you from the virtual machine between objects). init allows you to run this once for the full slice of time you get in the compute container.
Since most of the twitter datasets are for-pay, I needed to get a sample dataset up. I wrote a small node2manta daemon that just buffers up tweets into 1GB files locally and then pushes those up into Manta under a date-based folder. Beyond pointing you at the source (below; the only npm module in play besides manta is twit), I won’t go into any detail as it’s pretty straightforward. The scheme I used gives us 1GB of tweets per object under /$MANTA_USER/stor/twitter/$DATE.json. Note you need a twitter developer account and application to fill in the ... credentials in this snippet.
[code listing omitted]
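Since the listing isn’t reproduced here, the core buffer-and-rotate idea can be sketched like this (the names, threshold, and flush callback are mine; the actual daemon wires this between twit’s stream events and an mput-style upload to Manta):

```javascript
// Append tweets to an in-memory buffer; once it crosses a size threshold,
// hand the accumulated chunk to a flush callback and start over. The real
// script would pass a flush function that uploads to Manta.
function makeRotator(maxBytes, flush) {
  let chunks = [];
  let size = 0;
  return function add(tweet) {
    const line = JSON.stringify(tweet) + '\n';
    chunks.push(line);
    size += Buffer.byteLength(line);
    if (size >= maxBytes) {
      // name the object after the current time, as in the mls listing below
      flush(new Date().toISOString() + '.json', chunks.join(''));
      chunks = [];
      size = 0;
    }
  };
}
```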
After running that script for a while, I had this:
$ mls /$MANTA_USER/stor/twitter
2013-07-23T00:11:23.772Z.json
2013-07-23T01:47:49.732Z.json
2013-07-23T03:18:16.774Z.json
2013-07-23T04:49:40.730Z.json
2013-07-23T06:41:58.752Z.json
2013-07-23T09:03:19.772Z.json
2013-07-23T11:20:00.741Z.json
2013-07-23T13:07:43.800Z.json
2013-07-23T14:37:33.797Z.json
2013-07-23T16:03:44.764Z.json
2013-07-23T17:36:13.063Z.json
Ok, so we’ve got some data; now it’s time to write our map script. In this case I’m going to develop the entire workflow inside Manta using mlogin. If you’ve not seen mlogin before, it’s basically the REPL of Manta: it allows you to log in to a temporary compute container with one of your objects mounted. This is actually critical for us in building an asset with node_modules, as we need an OS environment (i.e., compilers, shared libraries, etc.) that matches what our code will run on. So I just fired up mlogin and set up my project with npm (the export HOME bit is only to make gyp happy). Then I just hacked out a script in the Manta VM by prototyping with this:
$ mlogin /mark.cavage/stor/twitter/2013-07-23T00:11:23.772Z.json
mark.cavage@manta # export HOME=/root
mark.cavage@manta # cd $HOME
mark.cavage@manta # npm install cld
mark.cavage@manta # emacs lang.js
mark.cavage@manta # head -1 $MANTA_INPUT_FILE | node lang.js
And the script I ended up with was:
[code listing omitted]
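As a rough reconstruction of what a lang.js like this does – read JSON tweets line by line, detect the language, and print one “$match $cld $twitter” row per tweet – here’s a runnable sketch. The real script calls the cld add-on; a trivial stand-in detector is used here so the sketch is self-contained, and the `text`/`lang` field names are simply the tweet attributes in play:

```javascript
// Map one JSON tweet line to "$match $cld $twitter": 1/0 for whether the
// detector agreed with twitter's own language tag, then both labels.
function classify(line, detectLang) {
  const tweet = JSON.parse(line);
  const detected = detectLang(tweet.text); // cld in the real script
  const match = detected === tweet.lang ? 1 : 0;
  return match + ' ' + detected + ' ' + tweet.lang;
}

// demo over a couple of inline lines (the stdin wiring is omitted);
// this naive regex detector is only a stand-in for cld
const naive = (text) => (/\b(the|and|is)\b/i.test(text) ? 'en' : 'un');
const sample = [
  '{"text":"the quick brown fox","lang":"en"}',
  '{"text":"hola mundo","lang":"es"}',
];
sample.forEach((l) => console.log(classify(l, naive)));
```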
So every tweet gets mapped to a 3-column output of $match $cld $twitter, which we can reduce on. Anyway, now that we’ve got this coded up, let’s tar it up and save it into Manta (again, from the mlogin session):
mark.cavage@manta # tar -cf tweet_lang_detect.tar lang.js node_modules
mark.cavage@manta # mput -f tweet_lang_detect.tar /$MANTA_USER/stor
...avage/stor/tweet_lang_detect.tar ==========================>] 100% 14.00MB
mark.cavage@manta #
To be pedantic while we’re here, we’ll go ahead and write the reduce step as well, even though it’s trivial. I’m just going to output two numbers: the first being the number of matches, and the second being the total dataset size. Note the reduce line below uses maggr, which is just a simple “math utility” Manta provides for common summing/averaging operations. Other users report success with crush-tools. Use what you like; that’s the power of Manta :)
mark.cavage@manta # head -10 $MANTA_INPUT_FILE | node lang.js | maggr -c1='sum,count'
6,10
mark.cavage@manta #
So given 10 inputs, we’ve got a 60% success rate with cld. Let’s see how it does on a larger sample set.
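For reference, what maggr -c1='sum,count' is computing over that 3-column output is easy to state in plain node terms:

```javascript
// Equivalent of maggr -c1='sum,count': sum column 1 (the 0/1 match flag)
// across all rows, and count the rows, emitting "sum,count".
function sumCount(lines) {
  let sum = 0;
  let count = 0;
  for (const line of lines) {
    if (!line.trim()) continue; // skip blank lines
    sum += Number(line.split(/\s+/)[0]);
    count += 1;
  }
  return sum + ',' + count;
}
```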
You can now exit the mlogin session; we’re ready to rock.
Ok, so to recap: we hacked up a map/reduce script with an asset using mlogin, and now we want to run a job on our dataset. Twitter throttles your ability to suck down their feed pretty aggressively, so by the time I wrote this blog I only had 11GB of data. That said, they’re just text files, so that should be a fairly large number of tweets. Let’s see how it does:
$ mfind -t o /$MANTA_USER/stor/twitter | \
mjob create -o -s /$MANTA_USER/stor/tweet_lang_detect.tar \
--init 'tar -xf /assets/$MANTA_USER/stor/tweet_lang_detect.tar' \
-m 'node lang.js' \
-r 'maggr -c1="sum,count"'
added 11 inputs to f7af6bcf-2126-4b1d-b9d5-c0f25a162786
2610121,3860111
How did I figure out what the -s and --init options should be, you ask? Simple: I ran mlogin again with -s specified, and tested out what my untar line should be.
Side point, if you’re interested, this initial prototype took 1m19s to run. An enterprising individual would likely be able to cut that (at least) in half by “pre-reducing” as part of the map phase; in my case, ~1m latency was fine, because I’m lazy. Also, the entire time it took me to prototype this from no code to actually having my answer was about 20m (not counting the time it took to ingest data – I just ran that overnight).
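“Pre-reducing” here would mean each map task pipes its own output through maggr and emits a single partial “sum,count” pair, so the reduce phase only has to merge a handful of pairs instead of millions of rows. Merging those pairs is just element-wise addition; a sketch:

```javascript
// Merge partial "sum,count" pairs (one per map task) into a final pair,
// as a pre-reduced reduce phase would.
function mergePairs(pairs) {
  let sum = 0;
  let count = 0;
  for (const pair of pairs) {
    const [s, c] = pair.split(',').map(Number);
    sum += s;
    count += c;
  }
  return sum + ',' + count;
}
```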
We’re actually done now. Clearly you could go figure out more interesting statistics here, but really I just wanted to quickly see what cld did on a reasonable dataset (I had ~3.8M tweets); it turned out surprisingly close to my original prototype of 60% (~67% accurate on tweets).
Also, while this example used node to illustrate a real world “custom code” problem, the same technique applies to python, ruby, etc.; you need to build up your “tarball” in the same way, and just push it back into Manta for future jobs to consume.
Hopefully that helps!
One of the very first questions that came up after launching Manta was how to have a browser directly upload an object into Manta. There are several reasons you would want this, but most obvious is that it allows clients to bypass your web server(s), which (1) reduces your bandwidth costs, and (2) reduces client latency. Unfortunately the browser security model is fairly complicated here as CORS is brought into play. To answer this question, I created a small (and ugly!) sample application to illustrate how to accomplish this. This blog post will explain the tricky pieces of the sample application; it will not stand alone without looking at the sample code.
There seems to be some general confusion around what signed URLs are used for. Basically, signed URLs are a way for you to hand out an expiring “ticket” that allows somebody else to see a single, private Manta object. For example, suppose I have an MP3 that I want to share with a few friends over email; rather than putting the file in /:login/public, and getting sued by the RIAA, I would place it in /:login/stor/song.mp3, generate a signed URL to it, and just send the URL to them.
msign is the command line utility that will generate a presigned URL, but in this example we’ll be generating it programmatically.
CORS, or “Cross-Origin-Resource-Sharing” is a mechanism that allows browsers to access resources that did not originate on the same domain. While functional, it’s very complicated (personally, there is little else in the web world I hate more than CORS); for a gentler introduction than the W3C spec, see the MDN page. Manta fully supports CORS on a per-directory and per-object basis, so that you are empowered to be as restrictive or permissive as you like. To achieve direct upload, you will need to set CORS headers on your directories. In the examples below, I’ve basically set them “wide open.”
The gist is that you are still running some web application, albeit a light one, that a browser interacts with to get “tickets” that allow the browser to directly write into some object under your control. There are other ways to accomplish doling out “tickets,” but this is the most practical. In the example I made, each browser session gets its own “dropbox” that the server sets up for it (in reality it would be tied to your webapp’s users, not a session). The browser has some little HTML form, and when the user selects a file and submits the form, the browser asks your webserver for a location to upload to. Your webapp generates a Manta signed URL and gives that to the browser. The browser then creates an Ajax request and sends the bytes up. Here’s an illustration of all that text:
Of course the devil is in the details…
In this example I’m using /$MANTA_USER/stor/dropbox as the root for uploads. Note that /:login/stor/ is “private,” so only you can read and write data. For each browser session that comes in, our webserver creates /:login/stor/dropbox/:session (which is just a random number in this example). When a user selects a file to upload, we send it to /:login/stor/dropbox/:session/:filename. If the user uploads the same file multiple times, it just gets overwritten.
We’ll start with an examination of the important parts of the web server. I used no dependencies in this example so there’s no confusion about which toolkit makes more sense, etc.; it’s all just “straight up” node http. I’m not going to walk through every line of the example application, but instead just give some more context on the particularly tricky parts that may not be clear.
When we see a new user session (to reemphasize again: you assuredly want this based off your user’s name or id or something), we create a “private directory”. Ignoring all the setup and node HTTP stuff, here are the bits that create a per-session directory – I’ve slightly modified the example code here to be readable out of context:
[code listing omitted]
The important aspect to point out here is the headers block we pass into the options block on mkdir. When a write request comes into Manta in a “CORS scenario,” the server honors the CORS settings on the parent directory. So setting up the requisite CORS headers on the directory we want to write into allows the browser to go through all the preflight garbage and send the headers it needs to upload an object directly.
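For concreteness, a “wide open” headers block of the sort you’d pass to mkdir might look like this. The header names are standard CORS response headers; the specific values, and the mkdir wiring in the comment, are illustrative assumptions rather than a copy of the sample app:

```javascript
// CORS settings stored on the dropbox directory; Manta replays these when
// the browser preflights and then PUTs into the directory.
const corsHeaders = {
  'access-control-allow-origin': '*',          // or your app's origin
  'access-control-allow-methods': 'OPTIONS, PUT',
  'access-control-allow-headers': 'content-type',
};

// hypothetical wiring with the node-manta SDK client:
// client.mkdir(dir, { headers: corsHeaders }, callback);
```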
This portion is actually pretty straightforward and handled by the Manta SDK. The only thing of interest here is that we’re signing a request to the given URL with two methods: OPTIONS and PUT. Normally you’d only hand out a signed URL with one method signed, but in this case, as the browser preflights the request with the same URL, we need the server side to honor both. Again, I’ve slightly modified the example application code here:
[code listing omitted]
So in the example above, the webapp POST’d a form to us with the file name the user wants to upload. A “real app” would want to sanitize that, stop CSRF attacks, etc., but that’s outside the scope of this little application. Here we blindly sign it and spit the URL back to the browser.
At this point we’re basically done with what our little webapp needed to do.
First, a disclaimer: I am awful at client-side code, so please don’t be wed to anything I did here. Anyway, so I made a single HTML page with an upload form and some jQuery pieces: specifically I used their Ajax API where it made sense, along with their Form Helpers. Lastly, so you can see progress information, I stuck in a progress bar.
Ok, enough preamble, let’s see the code!
[code listing omitted]
Yup, that’s a form. What do we do when the user submits? As per our flow above, we first need to request a place to write the file to, so we ask the webserver to sign the file name – note this is “straight up” jQuery and Ajax, nothing fancy about it:
[code listing omitted]
At long last, we can now push the raw bytes into Manta. Our own webserver gave us a URL we can write to for an hour. We now make an XHR2 request and directly PUT the data.
[code listing omitted]
A few notes:
- Don’t just submit the form straight at the signed URL: that sends a multipart framed message, which Manta will not interpret, meaning you would end up with an object that still has HTTP framing noise in it.
- The dropbox directory was created with access-control-allow-origin: *. That’s specifically so that future GET requests will work from a web browser as well. When reading an object, the CORS semantics are inferred from the object itself.

That’s pretty much it – once the browser completes the upload you can see the object using mls or whatever other tool you want.
This article explained how to construct a web application that allows clients to directly upload to Manta. It highlighted the relevant portions of a sample application I created that does this using Ajax and XHR Level 2. Comments welcome!