Ideas about Hurricanes

In the spirit of Hurricane Irene east of Florida right now…

I’ve never posted these ideas before, but I feel they are worth thinking about, for all the hurricane lovers. None of these, as far as I know, have been discussed extensively in the literature. Any, all, or none of them may have merit… I do not yet have enough physics knowledge to say.

Eye Size
…in general, eye size is inversely proportional to the amount of atmospheric instability. Hurricanes with large eyes and more annular characteristics form in relatively low-instability, low-SST environments, such as the East Pacific or the Atlantic. Pinhole eyes tend to be more common in late October-November, since lapse rates tend to be better later in the season (air temperature responds to the changing season faster than water temperature does). They are also more common in regions of warmer water, such as the Tropical West Pacific or the Caribbean.

Another possibility is that anomalously low or high environmental pressures correlate to smaller and larger eyes, respectively. Pressures in most of the East Pacific and Atlantic basins are higher due to the strength of the subtropical high. However, in the West Pacific, the southeastern East Pacific, and the Caribbean, occasional monsoon troughs or westerly wind bursts lower the surrounding atmospheric pressures… and in these areas pinhole eyes are more common.

Eyewall Replacement, Initiation
From what I’ve observed, eyewall replacement cycles (ERCs) may not be completely random. Rather, they seem to be responses to external perturbations in the environment. Possible factors below.

1) Diurnal maximum. Again, since water cools more slowly than air, lapse rates increase at night over the open ocean, which increases atmospheric instability. This increased instability may allow outer rings of convection to intensify and consolidate. Once this occurs, the inner eyewall gets choked off from its inflow and the eyewall replacement essentially begins. (See Felix 2007 for a good example of a nocturnal initiation of an ERC.)

2) Dry air entrainment/shear. If they can penetrate far enough, these factors can destabilize the inner core. In a quasistatic hurricane, I believe the inner eyewall is strong enough to induce localized subsidence around its periphery, preventing consolidation of an outer eyewall. Destabilizing the inner eyewall would thus allow outer eyewalls to consolidate more easily.

Large hurricanes tend to be very vulnerable to this. First, they have more outer rainbands that can form an outer eyewall. Also, more thunderstorms = more inflow drawn into the storm = greater susceptibility to pockets of dry air, whether from downsloping off a neighboring landmass or from a stable airmass. Many Caribbean Cruisers go through multiple ERCs, and I hypothesize that the downsloping off Hispaniola and S. America may be to blame. For the same reason, W. Pacific typhoons always seem to enter a stage of ERCs once they reach a certain latitude, where the surrounding environment becomes significantly more stable.

It’s the same idea for supercell updrafts, in fact. The more marginal the environment, the more susceptible the parent updraft is to cycling. The most powerful updrafts in the most intense environments remain quasisteady for a long time because the subsidence around the parent updraft kills any nascent updrafts in the flanking line. (For example, many tornadoes on 4/27 traveled over 100 mi – that takes about 1.5 hr at a storm motion of 70 mph.)

3) Land. Many hurricanes seem to start ERCs just as they make landfall. This may not be coincidental. Air around the periphery of the hurricane slows down because land has more friction than water, so the air piles up and converges, which aids in lift. Besides the drier land air destabilizing the inner core, this frictional convergence can help initiate the development of an outer eyewall just as the hurricane nears/moves onshore.

4) Stability of Eyewall. This IS in the literature, I think… the circumference of an eyewall increases in proportion to the eye radius. Pinhole eyes have fewer thunderstorms because of their small eyewall circumference, making them susceptible to even small perturbations. Moreover, the close proximity of the individual thunderstorms within the eyewall increases the probability of negative interactions between them. Small eyewalls are just inherently unstable and will generally collapse with time.

Eyewall Replacement, Completion
The better the environment, the faster an ERC finishes. The factors that destabilize an inner eyewall do more damage to a more exposed, weaker, fledgling outer eyewall. And as long as the outer eyewall remains weak, the inner eyewall still has life. So the storm can seem to be stuck in an endless ERC, when in fact neither eyewall attains dominance because of dry air, shear, or cooler water.

Land can actually help finish an ongoing ERC. Frictional convergence affects outer rings of convection first, namely the outer eyewall – so a disorganized outer eyewall can get an extra boost, close off, contract, and kill off the inner eyewall just as the storm comes ashore. In these instances it can seem like the storm is strengthening on land, when in fact frictional convergence has simply helped connect loose ends. This temporary boost doesn’t last long though. (A similar morphology can sometimes be seen in storms that have weak, loose cores – see Fay/Ike 2008, among others.)

An Anti-Beta Effect
The Coriolis force varies with latitude (it’s stronger near the poles), so hurricanes have an impetus to move north of due west when other steering factors are absent. This is known as the Beta Effect. But are there instances when the tendency is to resist poleward acceleration?

I think so. Consider the relative inertias of differently sized hurricanes. I suspect larger hurricanes will be harder to turn poleward simply by Newton’s 2nd Law – even if the Coriolis force is fictitious. When a steering influence is weak or short-lived, that may be the difference between a recurvature and a turn back west, though I have no direct evidence to substantiate that claim.

I should really include more of these to scatter throughout blog posts…

Stairs or Elevators? – Part 1: The Data.

By the end of summer school, I had lived at the Berkeleyan long enough to notice how darn slow the elevator was, and how much faster taking the stairs could be. So in response, I decided to quantify the time going up/down the elevator vs. going up/down the stairs. My analysis consisted of three Scenarios:

(1) going up/down +/- 1, 2, 3, and 4 floors on the STAIRS at a SLOW pace… slower than the average person would walk up or down at any given moment;

(2) going up/down +/- 1, 2, 3, and 4 floors on the STAIRS at a FAST, jogging/running pace… faster than the average person would normally take the stairs, but similar to how fast someone would go if they were rushing to class;

[Taking the middle ground between the two lines would probably approximate the walking time of an average person who’s not rushing.]

(3) going up/down +/- 1,2,3, and 4 floors on the ELEVATOR.

For Scenarios (1) and (2), 3-4 “walks”/trials were timed on each of the +/- 1, 2, 3, and 4-floor intervals (8 sets of trials for each scenario, 4 for up and 4 for down). The averages of each set in each scenario were plotted and a linear regression was fit to the data.

For Scenario (3), 3-4 MOVING trials were timed from the beginning of elevator acceleration to the beginning of elevator deceleration on each of the +/- 1, 2, 3, and 4-floor intervals (8 sets of trials for each scenario, 4 for up and 4 for down). Additionally, 8 separate (“WAIT”) trials were conducted for each of elevator moving UP and elevator moving DOWN (16 total combined) to determine time from press button to the beginning of elevator acceleration, and from the beginning of elevator deceleration to elevator door opening. The averages of the “MOVING” trials for each set were summed with the corresponding average UP or DOWN “WAIT” to find the total time spent in the elevator, and plotted. And again a linear regression was performed on the data.
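For the curious, here is a rough sketch (in Python, with made-up trial times rather than my actual data) of the kind of number-crunching involved: average each set of trials, then fit a line through the averages. The stair equations reported below have no intercept, so that fit goes through the origin.

```python
# Sketch of the trial-averaging and line-fitting, with made-up times (seconds).
import numpy as np

# Each key is the number of floors traveled; each value is one set of timed trials.
stair_fast_up = {1: [6.4, 6.8, 6.5], 2: [13.1, 13.4, 12.9],
                 3: [19.6, 20.1, 19.8], 4: [26.2, 26.5, 26.0]}

floors = sorted(stair_fast_up)                                    # [1, 2, 3, 4]
averages = np.array([np.mean(stair_fast_up[n]) for n in floors])  # mean of each set
floors = np.array(floors, dtype=float)

# The stair equations have no intercept, so fit a line through the origin:
slope = (floors @ averages) / (floors @ floors)
print(f"~{slope:.2f} s per floor")

# The elevator fits keep an intercept (the wait time); for those you'd use
# np.polyfit(floors, averages, 1) instead.
```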

The raw data may be downloaded here: ElevatorProj_1

And the chart of time vs. number of floors:

The dashed best-fit lines indicate going DOWN; the solid best-fit lines indicate going UP.

They are not reported on the chart, but the best-fit equations and R^2 values are:

Stair SLOW UP: f(x) = 13.836x; R^2 = 0.9995

Stair SLOW DOWN: f(x) = 13.42x; R^2 = 0.9993

Stair FAST UP: f(x) = 6.574x; R^2 = 0.9978

Stair FAST DOWN: f(x) = 6.07x; R^2 = 0.9984

Elevator UP: f(x) = 4.091x + 14.54; R^2 = 0.998

Elevator DOWN: f(x) = 4.663x + 14.42; R^2 = 0.997

In all cases, f(x) represents the time and x represents the number of floors traveled up or down – not the same as the start or destination floor! The slope is the time it takes per floor; the y-intercept of the elevator fits is the wait time derived from the regression (slightly different from the wait time derived directly from the experimental data).

Note that the +/- 3 floor SLOW walk trials were thrown out for inconsistencies with the data.

So, remarks. Note how the elevator is ALWAYS slower than running up the stairs over the measured range, even though it technically travels faster (lower slope = less time per floor). Because of the ~14 sec wait time, the elevator seriously lags, and it doesn’t catch up if you go up four floors (i.e. L->5) or fewer. It CAN nearly catch up to you if we extrapolate to five floors (L->R). That will be a good test to conduct in future studies. I predict that a person taking the elevator up to the roof will arrive at almost the same time as a person running up, given that the elevator is at the lobby at time zero.

For the slow walk, the stairs can still be faster, but the elevator catches up rather quickly. Nevertheless, this graph does provide a strong incentive for walking up 1-2 floors, and as we’ll see, perhaps more…

Note that the elevator is faster going up than going down, while the stairs are faster going down than going up. This suggests that it might be advantageous to take the elevator UP, but the stairs DOWN.
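If you set the best-fit lines equal to each other, you can pin down exactly where the crossovers happen. A quick sketch using the equations above:

```python
# Break-even point: number of floors where stairs time equals elevator time,
# using the best-fit equations reported above.
fits = {
    "slow walk up vs. elevator up": (13.836, 4.091, 14.54),
    "fast run up vs. elevator up":  (6.574, 4.091, 14.54),
}

for label, (stair_slope, elev_slope, elev_wait) in fits.items():
    # stair_slope * x = elev_slope * x + elev_wait  ->  solve for x
    x = elev_wait / (stair_slope - elev_slope)
    print(f"{label}: elevator catches up after ~{x:.1f} floors")

# slow walk: ~1.5 floors; fast run: ~5.9 floors -- consistent with the remarks above.
```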

More to come in Part 2…

Fatness, In Perspective

Someone wants to be the world’s fattest woman.

And to do it, she’s going to consume 22,000 Calories per day, until she reaches 1,600 lb.

FOOD FOR THOUGHT: SUSANNE’S DAILY DIET
Breakfast: 6 scrambled eggs cooked in butter (468 cals), 1/2 pound bacon (1,168 cals), 4 potatoes as hash browns (672 cals), 6 pieces of toast with butter (600 cals), 32-ounce cream shake (1,160 cals).
Snacking: 1 bag of animal cookies (1,950 cals), 2-litre bottle of soft drink (800 cals), 1 10.5-ounce bag of barbecue-flavour crisps (1,650 cals), 3 ham and cheese sandwiches (1,576 cals).
Lunch: 3 beef, bean, and green chilli burritos with 1 cup of sour cream (1,453 cals); salad (1 head lettuce, 1 cup cherry tomatoes, 1 cup carrots, 1 cucumber, 1/2 cup ranch dressing, bacon bits, 1 cup crumbled cheese, 1 cup chicken) (1,479 cals).
Dinner: 12 filled tacos with 1 cup sour cream (4,906 cals), 2-litre bottle of soda (800 cals); dessert: 8 scoops vanilla ice cream (2,080 cals), 1 small pan of brownies (1,200 cals).
Total: 21,962 calories

That’s ridiculous. So, in the spirit of my discovery earlier today that 500 Calories ≈ 2,000 kJ (2 MJ), I decided to see how much all those Calories can do. For reference: 1 MJ is roughly the kinetic energy of a 1-ton vehicle travelling at 100 mph. Source

I did a little Wolfram Alpha’ing.

So the full ~22,000-Calorie day ≈ 90 MJ. Notables (a quick sanity-check script follows the list):
…the rest energy of 1 microgram of mass. That may not sound like a lot, but that’s about 10^16-10^17 molecules of a typical substance.
…an extra $3.74 of electricity (electricity is cheap!)
…26 kilowatt-hours
…the amount of energy that could light up a 60-watt lightbulb for about 430 hours, or roughly two and a half weeks.
…the energy combusted by 2/3 of a gallon of gasoline (about 0.69 gallons). You can drive about 14 mi on a 20-mpg car with that much!
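If you’d rather not fire up Wolfram Alpha, the conversions are easy to reproduce. Here’s a rough sketch using the full ~22,000-Calorie daily total and some approximate energy densities and prices (the densities and electricity price are my assumptions):

```python
# Rough reproduction of the conversions above. The gasoline energy density and
# electricity price are assumed round numbers, not exact figures.
kcal = 22_000
joules = kcal * 4184                # 1 food Calorie (kcal) ~= 4184 J
mj = joules / 1e6                   # ~92 MJ, i.e. "~90 MJ"
kwh = joules / 3.6e6                # ~26 kWh
dollars = kwh * 0.145               # at ~$0.145/kWh -> ~$3.7
bulb_hours = kwh * 1000 / 60        # a 60 W bulb: ~430 hours
gallons_gas = mj / 130              # gasoline ~130 MJ/gal -> ~0.7 gal
print(f"{mj:.0f} MJ, {kwh:.0f} kWh, ${dollars:.2f}, "
      f"{bulb_hours:.0f} bulb-hours, {gallons_gas:.2f} gal")
```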

That’s a LOT of food… and a lot of energy for someone who’s in a wheelchair all day.

The Principle of Diminishment

Consider the Law of Diminishing Returns, or the Law of Diminishing Marginal Utility in economics. It states that for every additional unit of input added, the output (or utility, whatever) gained per unit of input/cost decreases. The upshot is this: consider two units of input. The output from the two units combined will always be less than the sum of the outputs from each unit separately. That may have been confusing to read, so let I_1 and I_2 be units of input, and let O(I) be the output that results from input I. This paragraph then more or less means:

O(I_1 + I_2)<O(I_1) + O(I_2).

Or, generalizing for n inputs,

O(I_1 + \cdots + I_n) < \sum_{j=1}^{n} O(I_j).
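As a concrete toy example (my own numbers, purely for illustration): if the output function is concave, say O(I) = \sqrt{I}, then for two unit inputs

O(1 + 1) = \sqrt{2} \approx 1.41 < 2 = O(1) + O(1).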

Now if we think in terms of operators, we can say the “output operator” is nonlinear. That is, contrary to common belief, the gains from hard work do not scale accordingly most of the time. (That doesn’t mean you shouldn’t work hard though, of course.) But I digress.

A few days ago, I had this epiphany: could this nonlinearity exist in other aspects of my life? The more I thought about it, the more I realized that the answer was yes. Examples to follow:

I’ve been very stressed in the past few days, from grades and other things. And I was thinking: this is bad and I feel very uncomfortable, but it does not feel as bad as I would expect from the sum of the stresses from each factor taken separately. Sometimes I would worry a lot about one stress factor but very little about the others; other times I would worry a lot about a couple of the others but not about the original one. Each stress factor kind of interferes with every other stress factor trying to stifle me. So maybe this diminishing-sum thing works with stress too? [This was the motivating factor for this post.]

And then consider music. If I try to alternate between two pieces of very good music, I notice that each individual piece does not sound as good as it would’ve if I just listened to that piece alone. That is, the total utility from the two pieces combined is less than the sum of the utilities I would’ve gotten from each individual piece.

Also consider two enjoyable activities – eating and watching TV. One derives less satisfaction doing both at the same time, than if he/she does each individually at separate times. There have been multiple studies that corroborate this.

We can say, then, that there could be a principle of diminishment that pervades nature: the additional response decreases for every additional stimulus added. Now it has been said that nature follows from mathematics, which is quite a blessing for us. So we wonder whether there is some mathematical formulation of the principle of diminishment, and there is.

Consider two arbitrary vectors in a space, \vec{a} and \vec{b}. There is a theorem, known as the Triangle Inequality, which states:

||\vec{a}+\vec{b}|| \leq ||\vec{a}||+||\vec{b}||,

where the double bars denote the magnitude (norm).

Equality only occurs when the two vectors point in the same direction (one is a nonnegative scalar multiple of the other), so for any two linearly independent vectors,

||\vec{a}+\vec{b}|| < ||\vec{a}||+||\vec{b}||,

which is the form more or less taught in HS Geometry. Graphically we can see that the vectors form the sides of a triangle (thus the name):

To cut this post short, because I am hungry and I’m failing at writing today, the upshot is: any two independent, unrelated stimuli will tend to interfere with or somehow dampen each other, producing the effect described above. Equality might be reached if the stimuli are “linearly dependent” – if they are directly related to each other (for example, I guess, eating and smelling your food).

P.S. this property is called subadditivity, apparently. “Principle of Diminishment” sounds better though.

I just wrote my first LaTeX document!

I feel like a professor now! (Oh boy.)

So yeah, it’s kinda long (5 pages; too long for a single blog post), but if you’re interested, feel free to download the PDF below.

The content is an analysis on how to get the most bang for your buck. If you’re somewhat stingy like me, it’ll be an enjoyable read. Feel free to critique and comment… let me know about any errors. (I’m sure there’ll be some.) It’s not completely 100% done yet, but it’s good enough to post, I suppose.

PDF –> UtilityvCost

For those who don’t feel like opening the PDF, but who are still stingy, a relevant, non-mathematical snippet:

Eqs. (18)–(20) combine the price and the price-dependent quality factors, and they state: if the percent increase of quality is larger than the percent increase of price, maximum satisfaction gets closer to maximum utility for higher prices. If, on the other hand, the percent increase of quality is smaller than the percent increase of price, maximum satisfaction gets farther away for higher prices. So higher prices are not always bad, but again one needs to weigh the factors. And in different situations, the results may end up quite different, because of how you assess the quality.

For instance, consider food. How much more quality will a fancy five-star restaurant give you, compared to a *decent* family-owned joint? Probably a lot. But how much more expensive will it be? A LOT. But what if you’re a food connoisseur? In that case, you will find much more quality in top-notch food than someone who is just eking out a living will. So you might want to opt for fancy restaurants more often than a college student would; you’ll get more bang for your buck that way, while for the college student, that may not be the case.

Now we take the same question and ask how hungry we are. Are we not very hungry at all, decently hungry, or starving? If you’re not hungry, fancy food and decent food will taste about the same, and neither will taste too awesome. So the percent increase in perceived quality will not exceed the percent increase in price. Likewise, if you’re starving, all food will taste amazing…and again your perception of quality will not increase a lot no matter how awesome the food truly is. The middle case is probably when you want to go to the fancier, pricier restaurants, because you will be able to best differentiate top-notch foods from marginal-quality foods.

In cases where your perception of quality changes very little with price–either from ignorance (no shame!) or the fact that the good is made the same everywhere–Eq (12) reduces to Eq (8). And then the cheaper item will always give you more bang for your buck, as we originally hypothesized.
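To make the percent-increase idea concrete, here is a toy numerical illustration. The numbers and the “quality” scale are made up, and this is not the actual machinery from the PDF:

```python
# Toy illustration of the percent-increase rule: if quality rises faster
# (in percent terms) than price, the pricier option gives more per dollar.
cheap = {"price": 10.0, "quality": 5.0}   # a decent family-owned joint
fancy = {"price": 15.0, "quality": 9.0}   # a five-star restaurant (to a foodie)

price_increase = fancy["price"] / cheap["price"] - 1        # +50%
quality_increase = fancy["quality"] / cheap["quality"] - 1  # +80%

print(cheap["quality"] / cheap["price"])   # 0.50 quality per dollar
print(fancy["quality"] / fancy["price"])   # 0.60 quality per dollar
print(quality_increase > price_increase)   # True -> fancy is the better value here
```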

P.S. Reason for my excitement: I LOVE LaTeX’s font and overall appearance… like omg. Don’t be hatin.

YouTube Results

Recall that I guessed a function Y(t) (or rather, its rate of change) for YouTube views vs. time for popular videos. I was interested enough in the idea that I actually tested it – with three popular YT videos:
1) By My Side – David Choi MV;
2) Rebecca Black – My Moment; and
3) Nigahiga – 4,000,000 Subscribers.

For those who do not wish to go through the original post, here was my original guess:

We’ll see how well it stands up against experimental data.

1) David Choi

For this video, I recorded only the number of likes for ~6 hours as a proxy; the YouTube data for views were deemed unreliable. The results are as follows:

The solid line is a logarithmic fit to the data. It’s a very good fit with an R^2 value of 0.994. But alas, there is no oscillatory nature to the data, at least from an initial glance. So perhaps the oscillatory component is negligible or doesn’t exist. But what about the other part? Since the derivative of a logarithm is 1/x, this data fits that part of the guess perfectly. Fantastic. However, note that the fit starts diverging from the data points for large times. So I figure a correction might be needed.

The possibility of a correction, along with the peculiar lack of oscillation with day/night, prompted me to do another, more rigorous analysis, on a much more popular video.

2) Rebecca Black

I was thrilled when Vicki alerted me to this <6 hours into the video's posting. For this video, I tried taking data points at irregular but short intervals of 3 hours or less. The data get sparser after a few days, as I was not near my computer the weekend after the video's posting. And here is the data:

Note that the logarithmic fit again performs very well with R^2 values over 0.9, but now the oscillations show up.

Now for this video I had enough data points to approximate the instantaneous rate of change of views at all times by assuming

Y'(t_{mid}) \approx \frac{Y(t_2) - Y(t_1)}{t_2 - t_1}

for any two adjacent data points at t_1 and t_2, where t_{mid} is the average of t_1 and t_2. For better accuracy, we only did this for adjacent data points that were less than four hours apart.
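In code, this is just a finite difference over each pair of neighboring samples; a minimal sketch (with hypothetical numbers, not the actual spreadsheet data):

```python
# Finite-difference estimate of the view rate from (time, views) samples.
# Times are in hours; the view counts here are hypothetical.
samples = [(0, 0), (2, 120_000), (5, 260_000), (8, 350_000), (20, 700_000)]

rates = []
for (t1, y1), (t2, y2) in zip(samples, samples[1:]):
    if t2 - t1 < 4:                      # only use pairs < 4 hours apart
        t_mid = (t1 + t2) / 2
        rates.append((t_mid, (y2 - y1) / (t2 - t1)))   # views per hour

print(rates)   # [(1.0, 60000.0), (3.5, ~46667), (6.5, 30000.0)]
```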

The derivatives are plotted below:

And WOW, look at the oscillations! But as it turns out, the exponential fit works out best here (which is why those fits appear on the graph). And yes, I know the R^2 values take a hit because any attempt by a spreadsheet to do an x^{-k} fit will blow up at 0, but more importantly, the curvatures of the data and 1/x don’t match. A graphical comparison of the shapes of 1/(x+1) and e^{-x} demonstrates this nicely:


(Source; green is e^{-x}, blue is 1/(x+1). The red graph will be explained shortly.)

However, neither one works fully alone.
–We just demonstrated that 1/(x+1) does not work as well for Rebecca Black’s derivative, but it does for David Choi’s, assuming the derivative is as good a fit as the plot itself.
–The better fit for Rebecca Black’s plot itself, however, is logarithmic, which is unusual given the prior statement. Notice the fit in the graph three images above becomes better for large times, whereas for David Choi, the fit becomes worse.
–In either case, the exponential derivative should not work for large times because that converges to zero very quickly, while we reasoned that it should converge much more slowly.

This is where the extra graph in the figure comes in. The sum of 1/(x+1) and e^{-x}, the red graph, looks very much like the exponential alone – Rebecca Black style, but does not converge to zero as fast, as would be expected. Varying the shift of the rational component (not shown) could make the derivative look a lot more like 1/x… as would be expected from a David Choi perspective. Either way, the derivative looks completely 1/x-ish after a long time, as the exponential dies out rapidly, and this is supported by Rebecca Black looking completely logarithmic after a long time. And by shifting the graphs, we can alter the appearance of the graph for smaller t‘s.
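For anyone who wants to reproduce the shape comparison, a few lines of matplotlib will do it (just a sketch of the figure above):

```python
# Reproduce the shape comparison: 1/(x+1), e^(-x), and their sum.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 10, 500)
plt.plot(x, 1 / (x + 1), "b", label="1/(x+1)")
plt.plot(x, np.exp(-x), "g", label="e^(-x)")
plt.plot(x, 1 / (x + 1) + np.exp(-x), "r", label="sum")
plt.legend()
plt.show()
```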

So ignoring the oscillations, which we’ll add back later, we assume a form


\frac{dY}{dt} = Ae^{-bt} + \frac{C}{t + t_0} \qquad \text{(Equation 1)}

where everything other than t (here A, b, C, and t_0) is a constant to be determined later.

We’re not done yet though – we still haven’t figured out why Rebecca Black oscillated but not David Choi. Or what exactly the “correction” is. Onto the next video…

3) Nigahiga

The plot of views vs. time:

(The fit really goes awry at large times in this case. Yikes.)

Time derivative, in the same fashion as for Rebecca Black:

And here we see slight oscillatory motion, but it dies to a negligible amplitude very quickly. Weird. Or is it?

Consider a mass-spring system on a rough surface with friction. Assuming that the resistive force is proportional to velocity, a differential equation can be set up which has (underdamped) solutions of the following form:

x(t) = A e^{-\zeta \omega_0 t} \cos\!\left(\omega_0 \sqrt{1-\zeta^2}\, t + \varphi\right)

(Source)

where ζ (the damping ratio) depends on the mass, the spring constant, and the coefficient of the resistive force, and ω_0 is the natural frequency.

Well, my belief, based on the data I have collected here, is that we have something similar going on with YouTube videos. Depending on various factors, video popularity (quantified by the rate of change of views/likes over time) dampens over time… and it can either dampen without oscillating much between night and day (if at all), à la David Choi/Nigahiga, or it can dampen very slowly over several oscillations, à la Rebecca Black. The 1/x factor ensures that the derivative never reaches zero overnight, and that the total views vs. time never reach an upper limit. The superposition of the dampened oscillation factor and the general 1/x factor in the derivative gives us what we want. [I’ll add something about the interpretation of a lack of night/day oscillation later.]

Regarding the correction: we’ll just add an exponent to the 1/x factor. This is a work in progress, lol… I don’t know of anything better to do here.

UPSHOT

The general solution for the non-oscillatory video is Equation 1, with the correction applied to the 1/x factor:

\frac{dY}{dt} = Ae^{-bt} + \frac{C}{(t + t_0)^k}

The general solution for the oscillatory video is as follows:

\frac{dY}{dt} = Ae^{-bt}\cos^2(\omega t + \varphi) + \frac{C}{(t + t_0)^k}

Again, everything not related to t is a constant. The square on the cosine term prevents the derivative from going negative, which wouldn’t make sense.
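To make both forms concrete, here is a minimal sketch in Python; the constants are placeholders I made up for illustration, not values fitted to any of the videos:

```python
# Sketch of the two assumed forms for dY/dt (the view rate). The constants
# here are illustrative placeholders, not fitted values.
import numpy as np

def rate_nonoscillatory(t, A=5e4, b=0.5, C=2e5, t0=1.0, k=1.2):
    """Damped initial burst plus a slowly decaying 1/t-like tail."""
    return A * np.exp(-b * t) + C / (t + t0) ** k

def rate_oscillatory(t, A=5e4, b=0.05, omega=np.pi / 24, phi=0.0,
                     C=2e5, t0=1.0, k=1.2):
    """Same tail, but the damped burst is modulated by cos^2 with a
    24-hour period (one peak per day); cos^2 keeps it non-negative."""
    return A * np.exp(-b * t) * np.cos(omega * t + phi) ** 2 + C / (t + t0) ** k

t = np.linspace(0, 120, 1000)                 # hours since posting
dt = t[1] - t[0]
views = np.cumsum(rate_oscillatory(t)) * dt   # crude integral -> total views
```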

Hopefully I’ll add more to this and maybe correct this again in the future. My original guess wasn’t too bad though, I suppose.

P.S. I plugged in some arbitrary constants to generate these plots that somewhat resembled the experimental data:

1) Rebecca Black

2) Nigahiga