Thursday, December 12, 2013

The SkyBox camera

Christmas (and Christmas shopping) is upon us, and I have a big review coming up, but I just can't help myself...

SkySat-1, from a local startup SkyBox Imaging, was launched on November 21 on a Russian Dnepr rocket, along with 31 other microsatellites and a package bolted to the 3rd stage.  They have a signal, the satellite is alive, and it has seen first light.  Yeehah!

These folks are using area-array sensors.  That's a radical choice, and I'd like to explain why.  For context, I'll start with a rough introduction to the usual way of making imaging satellites.

A traditional visible-band satellite, like the DubaiSat-2 that was launched along with SkySat-1, uses a pushbroom sensor, like this one from DALSA.  It has an array of 16,000 (swath) by 500 (track) pixels.
The "track" pixel direction is divided into multiple regions, which each handle one color, arranged like this:
Digital pixels are little photodiodes with an attached capacitor that stores the charge accumulated during the exposure.  A CCD is a special kind of circuit that can shift a charge from one pixel's capacitor to the next.  CCDs are read by shifting the contents of the entire array along the track direction, which in this second diagram would be to the right.  As each line is shifted into the readout line, it is very quickly shifted along the swath direction.  At multiple points along the swath there are "taps" where the stored charge is converted into a digital number representing the brightness of the light on that pixel.

A pushbroom CCD is special in that it has a readout line for each color region.  And, a pushbroom CCD is used in a special way.  Rather than expose a steady image on the entire CCD for tens of milliseconds, a moving image is swept across the sensor in the track direction, and in synchrony the pixels are shifted in the same direction.

A pushbroom CCD can sweep out a much larger image than the size of the CCD.  Most photocopiers work this way.  The sensor is often the full width of the page, perhaps 9 inches wide, but just a fraction of an inch long.  To make an 8.5 x 11 inch image, either the page is scanned across the sensor (page feed), or the sensor is scanned across the page (flatbed).

In a satellite like DubaiSat-2, a telescope forms an image of some small portion of the earth on the CCD, and the satellite is flown so that the image sweeps across the CCD in the track direction.
Let's put some numbers on this thing.  If the CCD has 3.5 micron pixels like the DALSA sensor pictured, and the satellite is in an orbit 600 km up, and has a telescope with a focal length of 3 meters, then the pixels, projected back through that telescope to the ground, would be 70 cm on a side.  We call 70 cm the ground sample distance (GSD).  The telescope might have an aperture of 50cm, which is as big as the U.S. Defense Department will allow (although who knows if they can veto a design from Dubai launched on a Russian rocket).  If so, it has a relative aperture of f/6, which will resolve 3.5 micron pixels well with visible light, if diffraction limited.
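This projection arithmetic is easy to check (a quick Python sketch, using just the figures above):

```python
# Pushbroom satellite optics, back of the envelope.
# Figures from the text: 3.5 um pixels, 600 km orbit, 3 m focal
# length, 50 cm aperture.
pixel_pitch = 3.5e-6   # m
altitude = 600e3       # m
focal_length = 3.0     # m
aperture = 0.5         # m

# Ground sample distance: one pixel, projected through the telescope
# onto the ground.
gsd = pixel_pitch * altitude / focal_length   # 0.70 m
f_number = focal_length / aperture            # f/6

print(f"GSD = {gsd * 100:.0f} cm, relative aperture = f/{f_number:.0f}")
```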

The satellite is travelling at 7561 m/s in a north-south direction, but its ground projection is moving under it at 6911 m/s, because the ground projection is closer to the center of the earth.  The Earth is also rotating underneath it at 400 m/s at 30 degrees north of the equator.  The combined relative velocity is 6922 m/s.  That's 9,900 pixels per second.  9,900 pixels/second x 16,000 pixel swath = 160 megapixels/second.  The signal chain from the taps in the CCD probably will not run well at this speed, so the sensor will need at least 4 taps per color region to get the analog to digital converters running at a more reasonable 40 MHz.  This is not a big problem.
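The velocity numbers can be reproduced from first principles.  A sketch, where the 465 m/s equatorial rotation speed and the assumption of a near-polar orbit are mine:

```python
import math

# Ground-track velocity and pixel rate for the 600 km pushbroom example.
GM = 3.986e14      # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6371e3   # m
altitude = 600e3   # m

v_orbit = math.sqrt(GM / (R_EARTH + altitude))       # ~7561 m/s
v_ground = v_orbit * R_EARTH / (R_EARTH + altitude)  # ~6911 m/s
v_rotation = 465.1 * math.cos(math.radians(30))      # ~403 m/s at 30 deg N

# In a near-polar orbit the ground track and the Earth's rotation are
# roughly orthogonal, so the velocities add in quadrature.
v_relative = math.hypot(v_ground, v_rotation)        # ~6922 m/s

gsd = 0.70                          # m, pixel footprint from above
line_rate = v_relative / gsd        # ~9,900 lines/s
data_rate = line_rate * 16_000      # ~160 Mpix/s across the swath
taps = math.ceil(data_rate / 40e6)  # taps per color at <= 40 MHz each

print(f"{v_relative:.0f} m/s, {line_rate:.0f} lines/s, {taps} taps/color")
```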

A bigger problem is getting enough light.  If the CCD has 128 rows of pixels for one color, then the time for the image to slide across the column will be 13 milliseconds, and that's the effective exposure time.  If you are taking pictures of your kids outdoors in the sun, with a point&shoot with 3.5 micron pixels, 13 ms with an f/6 aperture is plenty of light.  Under a tree that'll still work.  From space, the blue sky (it's nearly the same blue looking both up and down) will be superposed on top of whatever picture we take, and images from shaded areas will get washed out.  More on this later.
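The effective exposure is just the depth of one color region divided by the line rate:

```python
# Effective exposure of a TDI pushbroom: the time the image takes to
# sweep across one 128-row color region.
line_rate = 9_900   # lines/s, from the ground-velocity estimate
tdi_rows = 128
exposure = tdi_rows / line_rate   # ~13 ms
print(f"effective exposure = {exposure * 1e3:.0f} ms")
```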

Okay, back to SkySat-1.  The Skybox Imaging folks would like to shoot video of things, as well as imagery, and don't want to be dependent on a custom sensor.  So they are using standard area array sensors rather than pushbroom CCDs.

In order to shoot video of a spot on the ground, they have to rotate the satellite at almost 1 degree/second so that the telescope stays pointing at that one point on the ground.  If it flies directly over that spot, it will take about 90 seconds to go from 30 degrees off nadir in one direction to 30 degrees off in the other direction.  In theory, the satellite could shoot imagery this way as well, and that's fine for taking pictures of, ahem, targets.

A good chunk of the satellite imagery business, however, is about very large things, like crops in California's Central Valley.  To shoot something like that, you must cover a lot of area quickly and deal with motion blur, both things that a pushbroom sensor does well.

The image sliding across a pushbroom sensor does so continuously, but the pixel charges get shifted in a more discrete manner to avoid smearing them all together.  As a result, a pushbroom sensor necessarily sees about 1 pixel of motion blur in the track direction.  If SkySat-1 also had 0.7 meter pixels and just stared straight down at the ground, then to match that motion blur it would have to use a 93 microsecond exposure.  That is not enough time to get a signal above the readout noise.
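For the record, the blur-limited exposure is one pixel footprint divided by the velocity (the satellite velocity, as the text's figure implies):

```python
# Maximum stare-down exposure for <= 1 pixel of motion blur.
gsd = 0.70        # m, pixel footprint on the ground
velocity = 7561   # m/s, satellite velocity
max_exposure = gsd / velocity   # ~93 microseconds
print(f"max exposure = {max_exposure * 1e6:.0f} us")
```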

Most satellites use some kind of Cassegrain telescope, which has two mirrors.  It's possible to cancel the motion of the ground during the exposure by tilting the secondary mirror, generally with some kind of piezoelectric actuator.  This technique is used by the Visionmap A3 aerial survey camera, and it seems to me it's a good match to SkyBox's light problem.  If the sensor is an interline transfer CCD, then it can expose pictures while the secondary mirror stabilizes the image, and cycle the mirror back while the image is read out.  Interline transfer CCDs make this possible because they expose the whole image array at the same time and then, before readout, shift the charges into a second set of shielded capacitors that do not accumulate charge from the photodiodes.

Let's put some numbers on this thing.  They'd want an interline transfer CCD that can store a lot of electrons in each pixel, and read them out fast.  The best thing I can find right now is the KAI-16070, which has 7.4 micron pixels that store up to 44,000 electrons.  They could use a 6 meter focal length F/12 Cassegrain, which would give them 74 cm GSD, and a ground velocity of 9,350 pixels/sec.

The CCD runs at 8 frames per second, so staring straight down, the satellite's view will advance 865 m, or 1170 pixels, along the ground between frames.  This CCD has a 4888 x 3256 pixel format, so we would expect 64% overlap in the forward direction.  This is plenty to align the frames to one another, but not enough to substantially improve signal-to-noise ratio (with stacking) or dynamic range (with alternating long and short exposures).
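Here's that frame geometry as a sketch:

```python
# Frame-to-frame geometry for the KAI-16070 area-array concept.
pixel_pitch = 7.4e-6    # m
altitude = 600e3        # m
focal_length = 6.0      # m
v_ground = 6922         # m/s, ground-relative velocity from earlier
fps = 8                 # frames/s
track_pixels = 3256     # short axis of the 4888 x 3256 frame

gsd = pixel_pitch * altitude / focal_length   # 0.74 m
advance_m = v_ground / fps                    # ~865 m between frames
advance_px = advance_m / gsd                  # ~1170 pixels
overlap = 1 - advance_px / track_pixels       # ~64% forward overlap

print(f"GSD {gsd:.2f} m, advance {advance_px:.0f} px, overlap {overlap:.0%}")
```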

And this, by the way, is the point of this post.  Area array image sensors have seen a huge amount of work in the last 10 years, driven by the competitive and lucrative digital camera market.  16 megapixel interline CCDs with big pixels running at 8 frames per second have only been around for a couple of years at most.  If I ran this analysis with the area arrays of five years ago the numbers would come out junk.

Back to Skybox.  When they want video, they can have the CCD read out a 4 megapixel region of interest at 30 fps.  This will easily be big enough to fill an HDTV stream.

They'd want to expose for as long as possible.  I figure a 15 millisecond exposure ought to saturate the KAI-16070 pixels looking at a white paper sheet in full sun.  During that time the secondary mirror would have to tilt through 95 microradians, or about 20 seconds of arc for those of you who think in base-60.  Even this exposure will cause shiny objects like cars to bloom a little; any longer and sidewalks and white roofs will saturate.

To get an idea of how hard it is to shoot things in the shade from orbit, consider that a perfectly white sheet exposed to the whole sky except the sun will be the same brightness as the sky.  A light grey object with 20% albedo shaded from half the sky will be just 10% of the brightness of the sky.  That means the satellite has to see a grey object through a veil 10 times brighter than the object.  If the whole blue sky is 15% as bright as the sun, our light grey object would generate around 660 electrons of signal, swimming in sqrt(7260)=85 electrons of noise.  That's a signal to noise ratio of 7.8:1, which actually sounds pretty good.  It's a little worse than what SLR makers consider minimum acceptable noise (SNR=10:1), but better than what cellphone camera makers consider minimum acceptable noise (SNR=5:1, I think).
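Here's the electron budget, assuming (as above) that a 15 ms exposure just fills the 44,000-electron wells on a sunlit white sheet:

```python
# Shot-noise budget for a 20%-albedo object in full-sky shade.
full_well = 44_000           # e-, white sheet in full sun just saturates
sky_veil = 0.15 * full_well  # e-, blue-sky veil at 15% of full sun
signal = 0.10 * sky_veil     # e-, shaded grey object at 10% of the sky
noise = (sky_veil + signal) ** 0.5  # shot noise on all collected charge
snr = signal / noise                # ~7.8:1 after rounding

print(f"signal {signal:.0f} e-, noise {noise:.0f} e-, SNR {snr:.2f}:1")
```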

But SNR values can't be directly compared, because you must correct for sharpness.  A camera might have really horrible SNR (like 1:1), but I can make the number better by just blurring out all the high spatial frequency components.  The measure of how much scene sharpness is preserved by the camera is MTF (stands for Modulation Transfer Function).  For reference, SLRs mounted on tripods with top-notch lenses generally have MTFs around 40% at their pixel spatial frequency.

Roughly speaking, sharpening can double the high-frequency MTF at the cost of halving the SNR.  Fancy denoise algorithms change this tradeoff a bit by making assumptions about what is being looked at.  Typical assumptions are that edges are continuous and that colors don't have as much contrast as intensity.

The atmosphere blurs things quite a bit on the way up, so visible-band satellites typically have around 7-10% MTF, even with nearly perfect optics.  If we do simple sharpening to get an image that looks like 40% MTF (like what we're used to from an SLR), that 20% albedo object in the shade will have SNR of around 2:1.  That's not a lot of signal -- you might see something in the noise, but you'll have to try pretty hard.
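A sketch of that tradeoff, treating the SNR cost of sharpening as linear in the MTF boost:

```python
# Sharpening trades SNR for MTF roughly one-for-one: boosting the
# high-frequency response by 4x costs about a factor of 4 in SNR.
snr_shade = 7.8       # shade SNR from the shot-noise estimate
mtf_satellite = 0.10  # typical MTF through the atmosphere
mtf_slr = 0.40        # what we're used to from an SLR

gain = mtf_slr / mtf_satellite    # 4x sharpening
snr_sharpened = snr_shade / gain  # ~2:1

print(f"sharpened SNR = {snr_sharpened:.2f}:1")
```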

The bottom line is that recent, fast CCDs have made it possible to use area-array instead of pushbroom sensors for survey satellites.  SkyBox Imaging is the first to try this idea.  Noise and sharpness will be about as good as with simple pushbroom sensors, which is to say that dull objects in full-sky shade won't really be visible, and everything brighter than that will be.

[Updated] There are a lot of tricks to make pushbroom sensors work better than what I've presented here.

  • Most importantly, the sensor can have more rows, maybe 1000 instead of 128 for 8 times the sensitivity.  For a simple TDI sensor, that's going to require bigger pixels to store the larger amount of charge that will be accumulated.  But...
  • The sensor can have multiple readouts along each pixel column, e.g. readouts at rows 32, 96, 224, 480, 736, and 992.  The initial readouts give short exposures, which can see sunlit objects without accumulating huge numbers of photons.  Dedicated short exposure rows mean we can use small pixels, which store less charge.  Small pixels enable the use of sensors with more pixels.  Multiple long exposure readouts can be added together once digitized.  Before adding these long exposures, small amounts of diagonal image drift, which would otherwise cause blur, can be compensated with a single pixel or even half-pixel shift.

[Updated] I've moved the discussion of whether SkyBox was the first to use area arrays to the next post.

Thursday, October 31, 2013

Hyperloop Traffic

This is a huge post, about a subject that may not be terribly interesting.  I suspect most of you will want to skim all but the first section, and come back when later posts refer to this one.

Bottom line: If Hyperloop can get daily commuter traffic, at first within the Bay Area and Los Angeles areas and later between them, then it can gather at least $7b/year of revenue.  This is much larger than the $2.2b/year of revenue projected for California High Speed Rail.

Daily commute traffic is the most important market. The better Hyperloop addresses this market, the more revenue it will get.

The big picture
I have looked at the California High Speed Rail project’s expected traffic volume (example here).  They are expecting an average of 32,600 people/day to take the train between the LA basin and the Bay Area, and another 8,800 people/day between San Diego and the Bay Area.  For comparison, 29,000 people/day currently fly those routes.  So they are expecting everyone who currently flies to take the train instead.  While this is possible, it’s neither likely (door-to-door times using the train will be slower for most people), nor sufficient (it doesn’t bring in enough money), nor interesting (replacing one service with an equivalent doesn’t grow the economy).


The required investment is $68 billion and they expect $2.2 billion/year in revenue.  That’s just not enough revenue.  The goal is apparently to break even on operating costs and not need government subsidy, which I find appalling.  Of what use is a train if it doesn’t get anyone anywhere faster and it doesn’t make money?  About the only other thing it might accomplish is removing traffic from some other system that would otherwise have to be expanded.  The trouble is that the overcrowded system most in need of relief is local highways, and the HSR doesn’t do anything about that.


I think Hyperloop should have three goals:
  • Most Californians should see decreased travel times and improved travel flexibility.
  • 6% return on capital invested.
  • Massive new economic activity beyond the billions spent on the transport system directly.


To bring in an order of magnitude more revenue, Hyperloop must be used by a lot more people a lot more often.  There is only one way to do that: Hyperloop must significantly improve the daily commutes of a million Californians.  Just as the freeway system allows drivers to bypass most surface streets for journeys longer than 20 minutes, Hyperloop must allow drivers to bypass most of the freeway system for journeys longer than 40 minutes.


The average California commute is about 30 minutes.  12% of Bay Area and Los Angeles commuters accept commutes at least an hour long.  There are two opportunities here.  The first and more immediate is to cut 20 or more minutes out of hundreds of thousands of existing commutes within the Bay Area and Los Angeles.  The second is to enable daily commuting between Northern and Southern California, and over larger distances in general.


Practical commuting over distances like this will cause massive changes, just as the automobile disrupted the previous shapes of cities.  Hyperloop can bring together the labor markets in Northern and Southern California, open up gigantic new areas of real estate, save Californians perhaps a hundred million hours a year, and attract a half million passengers a day. (Em, my numbers don't actually support a million per day.)


As detailed below, I project revenue of at least $7b/year.
  • $2.8b from existing north/south traffic
  • $2.3b from existing commuters
  • Eventually, at least $1.9b from new long distance commuters, and perhaps multiple times this much.



The key to faster commuting is quick transitions between Hyperloop and ordinary car travel, so I have diverged from Elon Musk’s proposal.  I will summarize here and leave the details to another post.

  • The capsules I envision have no seats at all -- they are primarily car ferries.
  • Security would be the same as on our freeway system -- open access and zero delay, along with police surveillance.
  • The time between capsules while underway would be 1-2 seconds, similar to that of cars on the freeway.
  • I envision routing the tubes underwater.  I just don’t see voters accepting massive overhead tubes in cities.
My last point of departure is that I propose to carry truck traffic for more diversified revenue.

Northern California to/from Southern California non-commute traffic
The following analysis leads me to expect that, perhaps five years after initial operation, the north-south link would carry about 25 thousand one-way revenue-generating capsule trips per day, from the replacement of trips that people take today.
  • 10k/day replace I-5 truck traffic
  • 8k/day replace I-5 car traffic
  • 7k/day replace flights and subsequent car rentals
  • 350/day replaces flights which are segments of longer flights


To be attractive to truck traffic, a north/south capsule ride must be priced around $300, which makes a car ride $75 and a bus ride under $20.  North/south revenue will be around $2.8b/year.
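Adding up the four replacement cases at the $300 capsule price:

```python
# North/south revenue from the four replacement cases, at $300/capsule.
price = 300   # $/capsule, set by the truck bridge case
capsules_per_day = {
    "I-5 trucks": 10_000,
    "I-5 cars": 8_000,
    "flight + rental": 7_000,
    "airport shuttle buses": 350,
}
daily = sum(capsules_per_day.values())   # ~25k capsules/day
revenue = daily * price * 365            # ~$2.8b/year

print(f"{daily:,} capsules/day -> ${revenue / 1e9:.1f}b/year")
```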

Why Trucks?

Carrying 18-wheeler trailers will require a substantially bigger capsule and tube than carrying sedans, and so substantially more capital investment.  I don’t have an estimate of how much more capital investment, but I do have an estimate for the expected revenue from truck replacement traffic: about $1b/year from 10k capsules/day on the north-south link.  This is perhaps 15% of the total revenue stream.


Nationally, people spend one-third as much on truck freight as on car travel, but they spend twice as much on truck freight as air travel.  This leads me to believe that truck replacement revenue for Hyperloop will eventually be more like 20% of the total revenue stream.
ca. 2009          tonne-km freight   revenue        passenger-km   revenue       2010 user costs
                  (million)          ($/tonne-km)   (million)      ($/pass-km)   (billion $)
Car                     --               --          4,507,134      0.168*         $757*
Truck              1,929,201           0.113             --           --           $250
Air                   17,559           0.671          887,941       0.075          $110
Intercity Rail     2,309,811           0.021            9,518       0.191          $50


The decision to carry trucks will hinge on the return on an incremental billion dollars of revenue versus the incremental investment for bigger tubes.

I-5 Truck bridge case: 10k capsules/day

A fleet operator with tractors in both LA and SF can move freight between the two more cheaply over Hyperloop than over I-5.

Over-the-road truck drivers (the ones on the road for two weeks at a time) are paid $0.19 to $0.25/km.  The vehicle depreciates $0.06 to $0.07/km.  They burn $0.27/km of diesel.  That adds up to around $0.55/km in 2013.  Trucks averaged 11.3 cents per tonne-km in 2009, which suggests the average load was around 5 tonnes, which seems reasonable.

So, the 600 km from SF to LA costs a truck operator around $330.  It’s possible for Hyperloop to charge a premium, because the capsule trip gets the load to the destination 5 hours sooner.  But the premium will only be paid for a small number of loads.  In order to get most of this business, a capsule trip will cost around $300.
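The cost stack, using midpoints of the ranges above:

```python
# Operating cost for a one-way SF-to-LA truck run, per the figures above.
km = 600
driver = 0.22          # $/km, midpoint of the $0.19-$0.25 range
depreciation = 0.065   # $/km, midpoint of $0.06-$0.07
diesel = 0.27          # $/km

cost_per_km = driver + depreciation + diesel   # ~$0.55/km
trip_cost = cost_per_km * km                   # ~$330

print(f"${cost_per_km:.3f}/km -> ${trip_cost:.0f} per trip")
```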

The current truck traffic on I-5 is 10k trucks/day (one-way).  10k capsules/day is more traffic than I expect from air traffic replacement.  Because the truck bridge case will also be more price sensitive, it will probably set the capsule trip price for long-distance routes.

Payloads

The initial payloads with the greatest revenue potential are cars and 18-wheeler trailers, and eventually buses and container freight.



Payload                         Max weight    Frontal area      Length
Cars (four, end to end)         10.5 tonnes   1.7 m x 2.0 m     21.0 m
18-wheeler (just the trailer)   30.8 tonnes   4.12 m x 2.43 m   16.15 m
Bus                             23 tonnes     3.5 m x 2.6 m     13.7 m
Shipping container              32.5 tonnes   2.9 m x 2.5 m     13.7 m

The containing capsule will have a payload diameter of 4 to 5 meters.  The larger number is if we wish to back standard 18-wheelers directly into the capsule.  The smaller number is if we are willing to take the wheels off the trailer first.

Tube diameter will be 6 to 7 meters, about 2x that of the Hyperloop-alpha proposal.

I-5 Car bridge case: 8k capsules/day

Here’s an interesting statistic: more people drive from northern to southern California than fly: Caltrans: 2011 California Traffic Volumes

There are currently 30k cars/day travelling between LA and SF, each burning $60 of fuel and 6 hours of driver time.  As above, four cars can share a Hyperloop capsule, for an amortized ticket cost of $75/car.  Net of the $60 in fuel saved, the driver buys back nearly a day of driving for $15.

I’ll assume nearly all drivers will take the Hyperloop, and traffic may increase due to greater convenience.  This would be 8000 capsules/day.

Flight+rental case: 7k capsules/day

Consider someone taking a flight down to LA, then renting a car for 5 days.
Shuttle      $70    1 hour   (one-way; might have to pay both ways, or pay for parking)
Security            1 hour
Flight       $138   1 hour   (one way)
Rental car   $188   30 min   (5 days, compact)
Total        $396   3:30

8 million people do this every year between the Bay Area and Los Angeles or San Diego.

The Hyperloop is a total win, even if only a single car takes a capsule.
  • 4 cars share a $300 ticket, $75 each, about 5 times cheaper, and you get your own car.  And, you don’t have to pay to park your own car at the airport.
  • Assuming it takes 30 minutes to drive to the Hyperloop station, and an hour to get to LA, you’ve saved two hours in each direction.  
  • If there are more people in the car (say, a family of 4), you must buy extra plane tickets and shuttle fares.  The Hyperloop option costs nothing more.

Assume Hyperloop gets all of this traffic.  At an average vehicle loading of 1.5 people/car, that's 22,000 people/day and about 14,700 cars/day.  Assuming an average of 2 cars/capsule (many people will want their own capsule), that's about 7,300 capsules/day, or $800m/year in revenue to Hyperloop.
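The arithmetic, as a sketch:

```python
# Flight + rental replacement: 8 million trips/year through the funnel.
people_per_year = 8e6
people_per_day = people_per_year / 365   # ~22,000
cars_per_day = people_per_day / 1.5      # ~14,700 at 1.5 people/car
capsules_per_day = cars_per_day / 2      # ~7,300 at 2 cars/capsule
revenue = capsules_per_day * 300 * 365   # $800m/year at $300/capsule

print(f"{capsules_per_day:.0f} capsules/day -> ${revenue / 1e6:.0f}m/year")
```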

At 20% to 75% of the price, and less than half the time, we should expect an increase in this traffic volume, and Hyperloop will see all that additional traffic.

Airport Shuttle flight case: 350 capsules/day

Not all the people flying between the Bay Area and Southern California are renting a car.  For 2.5m people per year, this hop is one of at least two.  For instance, when flying from San Francisco to Phoenix, one generally stops at LAX along the way.

Airports could run a bus-over-Hyperloop service between airport pairs to move all this traffic off airplanes.  The airports win in two ways: first, they open up runway slots to more profitable longer-distance routes; second, they essentially get into a high-margin local airline business.  The airlines win too, because they can pack their airplanes better: passengers may be more willing to accept a one-Hyperloop, one-plane trip instead of a nonstop, if the Hyperloop hop gets them to the destination sooner.

It’s about 7000 people/day.  Assuming buses with 20 people (⅓ full), that’s about 350 capsules per day.  This would be incredibly convenient for passengers, as there would be a bus leaving from each of the three major airports in each area about every 20 minutes.

Commute traffic

Just the traffic from replacing portions of existing long commutes is huge:
  • 105k/day Bay Area commute capsules (half of all existing >50 minute commutes)
  • 125k/day Los Angeles commute capsules (¼ of existing >50 minute commutes)

Capsule rides would average about $60 and carry four cars.  Yearly revenue would be $2.3b/year.  However, this estimate is sensitive to the distribution of Hyperloop terminals, and the time it takes to get through them.

Quick trips really matter. For every minute saved, per trip, I expect an additional 9k/day commute capsules and $135m/year in revenue. This traffic increase is strongly nonlinear, however. If we could get the trip times down to around 35 minutes per trip, we'd expect to see 40k/day extra commute capsules per minute saved (and $600m/year in additional revenue).
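Those sensitivities, at $60 per capsule and 250 working days per year:

```python
# Revenue sensitivity of commute traffic to door-to-door trip time.
price = 60        # $/capsule for local commute trips
workdays = 250    # commute days per year

extra_per_minute = 9_000   # capsules/day gained per minute saved
rev_per_minute = extra_per_minute * price * workdays   # $135m/year

steep_per_minute = 40_000  # capsules/day per minute near 35-minute trips
steep_rev = steep_per_minute * price * workdays        # $600m/year

print(f"${rev_per_minute / 1e6:.0f}m/year, ${steep_rev / 1e6:.0f}m/year per minute")
```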

Extra terminals (in the right places) would really matter, especially in inland Los Angeles, Orange, San Diego, and Contra Costa counties, where I expect each terminal to support 16k/day commute capsules and $235m/year in revenue.

Because the north/south door-to-door time will be about an hour, it will be possible to have a daily commute between northern and southern California.  Even if just 1% of commuters use Hyperloop over long distance runs, this is a colossal amount of traffic: 25k capsules/day, bringing in $1.9b/year.

Existing Bay Area commuter case: 105k capsules/day

The Bay Area has the largest fraction of long distance commutes in the nation.  2% of commuters travel at least 50 miles and 90 minutes, each way.  About 12% of commuters travel 60 minutes each way, and the average commute is 30 minutes.

Using the 2011 U.S. Census ACS data, I predict there are 420,000 commuters in the Bay Area with at least a 50 minute commute.  As shown in the map below (created with Trulia’s excellent tool), at least half of these commuters could be within 15 minutes of a Hyperloop terminal, and so could reduce their commute by 20 minutes and 15 miles with a Hyperloop jump.  So a local Hyperloop (with 21 terminals as shown) would have a market of around 420k car trips per day.  At four cars per capsule, that’s 105k capsule trips per day.

15 miles of commuting costs around $4.05 each way (using AAA’s $0.27/mile incremental cost for a medium sedan in 2013).  20 minutes of the person’s time is worth something as well, at least $6.  Each local car trip could be sold for $10, so yearly revenue for trips within the Bay Area would be $1.05b.
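The Bay Area arithmetic, as a sketch:

```python
# Bay Area commute revenue: half of the 420k long-commute drivers,
# making two $10 trips per working day.
commuters = 420_000 // 2          # within 15 min of a terminal
trips_per_day = commuters * 2     # 420k one-way car trips/day
capsules_per_day = trips_per_day // 4   # 105k at 4 cars/capsule
revenue = trips_per_day * 10 * 250      # $1.05b/year at 250 workdays

print(f"{capsules_per_day:,} capsules/day -> ${revenue / 1e9:.2f}b/year")
```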

Existing Los Angeles commuter case: 125k capsules/day

The Los Angeles commute market is both more lucrative than the Bay Area’s (620,000 commutes are at least 60 minutes, 1,100,000 are at least 50 minutes) and more problematic, as more of the population is farther from the water.  Nonetheless, a Hyperloop can be run down the coast and reach perhaps ¼ of the population in 15 minutes or less.
Again using ACS data, I predict there are 1 million commuters in Los Angeles with at least a 50 minute commute.  The core 8 Hyperloop transfer stations shown would service 250,000 of these commuters and bring in $1.25b/year.

The map above shows a terminal in the southern San Fernando Valley, which would require a 10 mile tunnel bore through the Santa Monica mountains.  There are several other places where tunnel bores, or perhaps cut-and-cover through lower-cost real estate, could reach lucrative markets.  The map above also shows 5 terminals in Santa Barbara, Ventura, and San Diego, areas which are currently beyond commuting range of Los Angeles for most people.

Tunneling cost is not necessarily prohibitive: a 5 mile x 15-foot diameter tunnel was recently completed under San Francisco Bay for $286 million.  The tunnel imagined above would be three times the diameter and twice as long, so perhaps four times the cost.  A $1b capital outlay to bring in $150m/year seems quite reasonable.

There is a significant externalized benefit: these 250,000 commuters would no longer be on the 405 freeway for most of their trip.  The Hyperloop would unload a huge amount of traffic from the freeway system, which should speed up even those commutes that can’t be serviced by Hyperloop.