Tuesday, September 6, 2016

Trigger warnings

I cannot praise a fugitive and cloister'd virtue, unexercis'd & unbreath'd, that never sallies out and sees her adversary, but slinks out of the race, where that immortal garland is to be run for, not without dust and heat. Assuredly we bring not innocence into the world, we bring impurity much rather: that which purifies us is trial, and trial is by what is contrary.
-- John Milton, Areopagitica

There continues to be discussion as to whether college courses should include “trigger warnings” for potentially upsetting content. No US college or university I know of requires them, one large survey found that fewer than 1% of institutions of higher learning do, and the American Association of University Professors opposes them, so the discussion is largely hypothetical at this point. But since a vocal minority argues they should be mandatory, it behooves us to consider whether that would be a good idea.

The arguments for trigger warnings are all over the map, but the main threads are:

1. This is a simple courtesy to wounded students; just basic politeness.
2. This is a medically necessary accommodation for students with PTSD and flashbacks.
3. Trigger warnings are necessary because presenting sexism, racism, etc. to students who may have suffered from it, without warning them that said horror is coming, normalizes the horror and thereby constitutes a microaggression against the sufferer which may contribute to silencing them and excluding their experience from the discourse.

The arguments against trigger warnings are also varied and include:

1. It is the responsibility of adult students to deal with their emotional responses to college coursework.
2. Requiring trigger warnings, and thus implicitly extending the promise that you will NOT be exposed to emotionally trying materials without forewarning and consent, damages the university as a location for the free exchange of ideas.
3. Trigger warnings infantilize students.

Before I examine trigger warnings, let me define a few terms. (It’s important to be precise. We’re not talking about potentially civilization-ending anthropogenic climate change today; we’re talking about 18-year-olds trying to read Proust and Kant, in other words, something really important.) A text is used here to mean any cultural artifact students might make an object of study, including fiction and nonfiction, pictures and movies, TV scripts or poetry. PTSD, or post-traumatic stress disorder, is a mental illness estimated to affect between four and six million people in the United States, the symptoms of which can include flashbacks, which are technically referred to as re-experiencing symptoms.

In searching for information about trigger warnings I can find no evidence that they work in the sense of aiding those with PTSD, and no evidence of harm when they are incorporated into college coursework. In fact, as far as I can tell, neither question has been studied. Until and unless they are, I am reluctant to form a strong opinion about trigger warnings. When it comes to their effect on students as a whole, I almost despair of ever getting any data to work with, but if you are going to advocate for trigger warnings in the name of PTSD sufferers, the absence of any evidence that they help should trouble you.

As of today, both the potential benefits and the potential harms of requiring trigger warnings can only be hypothesized.

Might trigger warnings fail to help those with PTSD? It might seem self-evident that, since it is unpleasant to be “triggered,” a warning is preferable. It may not be so. If trigger warnings encourage avoidance, a common problem for sufferers of PTSD, they might do harm. Telling a sufferer a “trigger” is coming might blunt their reaction, but it also might, by suggesting provocative content is coming, strengthen the reaction, “priming the pump” for re-experiencing symptoms.

Trigger warnings inevitably direct attention towards the PTSD-afflicted student. This may be good or bad. Some mental illnesses, such as somatoform disorders, benefit from frequent structured attention to the symptoms. Others, like borderline personality disorder or non-epileptic seizures, seem to get worse when the wrong kind of attention is given. Without data, it’s hard to say whether trigger warnings would help those with PTSD, hurt them, or make no difference at all.

I am talking as if the purpose of trigger warnings is to help students with PTSD and flashbacks (which might describe one out of a hundred college students, if that), but it’s clear that some advocates see a much wider role for trigger warnings than this. Consider the much-maligned and now suspended Oberlin trigger warning guidelines:
In an Oberlin class that contains 20 students, we estimate that there may be about 2 to 3 students in the class who have experienced some form of sexualized violence. If 1 in 3 women and 1 in 4 men have experienced IPV, there can be at least 5-6 survivors of IPV in the class. In other words, you may have taught and may continue to teach individuals who have experienced significant trauma. . . .

Oberlin’s community cannot afford to ignore sexualized violence, including intimate partner abuse and stalking.  Faculty can make a serious impact on students’ lives by standing against sexual misconduct and making classrooms safer.
But this concern for sexual abuse survivors is quickly subsumed in a much larger set of issues:
·  Triggers are not only relevant to sexual misconduct, but also to anything that might cause trauma.  Be aware of racism, classism, sexism, heterosexism, cissexism, ableism, and other issues of privilege and oppression.  Realize that all forms of violence are traumatic, and that your students have lives before and outside your classroom, experiences you may not expect or understand.
·  Anything could be a trigger—a smell, song, scene, phrase, place, person, and so on.  Some triggers cannot be anticipated, but many can.
·  Remove triggering material when it does not contribute directly to the course learning goals.
The initial goal of warning the student so they can prepare themselves quickly evolves, in the Oberlin guidelines, into getting the bad stuff out of the picture entirely. When instructors cannot get rid of it, they are instructed to apologize for it, in such a way as to close off any potential exploration of whether or to what extent a work is racist, classist, elitist, and so on. It is definitionally unclean.
·  Tell students why you have chosen to include this material, even though you know it is triggering.  For example:
    • “…We are reading this work in spite of the author’s racist frameworks because his work was foundational to establishing the field of anthropology, and because I think together we can challenge, deconstruct, and learn from his mistakes.”
    • “…This documentary challenges heterosexism in an important way.  It is vital to discuss this issue.  I think watching and discussing this documentary will help us become better at challenging heterosexism ourselves.”
·  Strongly consider developing a policy to make triggering material optional or offering students an alternative assignment using different materials.  When possible, help students avoid having to choose between their academic success and their own wellbeing.
These quotes are from a larger policy which was roundly condemned and ultimately abandoned; there are certainly other ways to handle trigger warnings, but I think the Oberlin experience indicates that there is something much, much more complex and problematic going on in the debate over trigger warnings than simply being polite and considerate. There is a strong ideological perspective here which is presenting an agenda item as necessary to protect the mentally ill. 

Not only is the evidence that it will do so nonexistent; beyond that, a concern arises any time people use the sick and the vulnerable to define certain kinds of cultural expression as abusive or dangerous.

This is a thing that always happens when something cannot be condemned in terms of the choices adults make, but which some people strongly dislike anyway, whether it is pornography, or violent video games, or homosexual characters in movies or television. The claim that “you and I are fine, but we must think of the vulnerable” always seems to crop up in this context.

One interpretation of this debate, then, and I do not say this is the only one or the correct one,  is that for people wishing for the academy to more clearly and explicitly condemn racism, sexism, ableism, classism, etc. in the Western canon, this is a slightly modified won’t-somebody-please-think-of-the-children argument. Yes, they may posit, you could bring these texts into the classroom and let students and teachers tease them apart and find these aspects themselves, and decide what they think about them, but won’t-somebody-please-think-of-the-traumatized?

There are a number of things about mandatory trigger warnings I would describe as potentially harmful. Again, we haven’t gathered data on this and we don’t know. One, and this may be a minor matter, is that it makes more work for the instructors, who have to add the warnings to the syllabus. As a member of a profession (medicine) that is being crushed by a mindset of "just one more" documentation requirement, this is near to my heart.

More seriously, introducing students to a text with a series of labels describing the ways in which it is potentially traumatizing encourages them to anchor upon the ways in which it is offensive even as they are first entering the author’s world and beginning to understand the author’s concerns and perspective.

Labeling a text as traumatically racist, sexist, or classist, or even as containing violence or rape, encourages people to approach it through the prism of our modern values and concerns, prepared to be hurt and offended by what has been labeled hurtful and offensive.

The open-ended nature of what constitutes a trigger, and of which triggers are going to be labeled, concerns me. Labeling rape and graphic violence, though it may be a good or a bad idea, is at least fairly limited. Extending trigger warnings to racism, sexism, classism, heterosexism, elitism and so on seems like an open invitation to a giant clusterfuck of disagreement, not only about what type of thing is a trigger but about what kind of reference to it (direct or indirect, graphic or abstract) constitutes a trigger. Does a battle in a history book require a trigger warning? What about a massacre? Or a description of a slave auction? A list of slave auctions? A map of the transatlantic slave trade?

Activists of one kind or another will press to have examples of things offensive to them labelled as triggers. Jewish students will describe Palestinian nationalism as triggering. Palestinians will describe Zionism as triggering. Arabs will be triggered by Orientalism and trans students by cissexism. No one will want to be left out, since that would imply that their pain from the oppression they have suffered is less significant than other folks'.

What concerns me about this is not so much that we will end up with the wrong triggers, or too many triggers, as that the process itself is likely to be vicious, vituperative, and most of all endless.

What’s more, if in fact the experience of the student determines what is traumatizing and we are to “believe the student,” as the Oberlin guidelines mandate, can we imagine a day when Christian students describe homosexual sex in a novel as triggering? At most of our colleges and universities such a claim would be howled down in outrage, but this just underscores the fact that formulating a list of legitimately traumatizing subjects is an inherently political act. Black students will be warned about discussions of the slave trade; Southern students will not get warnings before discussing Sherman’s March. Which may be very fine and good, but it indicates the presence of unspoken assumptions and premises in the implementation of “trigger warnings” that have no place in the explicit theory.

It’s concerning to me that some activists are asserting a position of radical vulnerability, in which triggering things are said to paralyze them with horror. That does not seem like a stance that is sustainable or emotionally or spiritually healthy. While this may begin as a pose or a way of advocating for others, as soon as activists “win” by showing evidence of being triggered, more of them will start to experience those symptoms. We all respond to positive reinforcement. If we encourage student activists to wear a mask of extreme vulnerability, to some extent that mask will become reality for some of them, which is not desirable.

The best case I think can be made for “strong” trigger warnings covering racism, classism, ableism, cissexism, etc., is that our society, like many ostensibly free societies, sustains and reinforces privilege by quasi-objective measures of capability or application that don’t take into account the different place marginalized people are coming from.

Take a hypothetical case of a university with ten swimming scholarships for the ten fastest swimmers in Pittsburgh. The standards of the scholarship are objective – but when you look into it you may find 30 public swimming pools in predominantly white neighborhoods, and 2 in black neighborhoods. Our “objective” scholarship doesn't take into account that difference in infrastructure between the two communities in Pittsburgh.

Proponents of strong trigger warnings argue something similar occurs with the traditional back-and-forth of the academy. The ability to assert one’s beliefs, argue for one’s perspective, and recognize and vigorously refute slights directed at you or your community (or one of your communities) is not distributed equally. One of the ways in which it is unequally distributed is that a larger proportion of marginalized people have experienced the kind of trauma that could cause them to be “triggered.”

An environment of traditional academic freedom, in which instructors set their own readings, say and entertain things in discussion that touch upon painful and difficult topics, and trust that “the answer to free speech is more free speech,” is, in this account, fine and good for those who have benefited from white privilege: they have been taught from a young age that their opinions matter, that they can express them without fear, and they do not carry the burden of trauma that may be re-provoked by careless treatment of painful subjects.

Safe spaces and microaggressions, trigger warnings and affirmative action: all are premised in this basic (and I think in some measure correct) argument that we didn’t all get here from the same place, we are not all in fact here in the same here exactly, some of us got here hurt and damaged from wrestling with injustice, and a “fairness” that asks everyone to line up for a footrace when some have been kneecapped is not very fair at all, really. “The law, in its majestic equality, forbids the rich as well as the poor to sleep under bridges, to beg in the streets, and to steal bread.”

Accepting that there is something here, something that is not just self-pity, or a cunning attempt to dictate the terms of the classroom discussion by professions of weakness, it is still not clear to me that pre-labeling texts as racist or sexist or classist is actually helpful.

There is already in the modern American academy a strong tendency to interpret all texts through the prism of injustice and oppression. That is an important frame, and I don’t wish to slight it, but a good text is so much more than that that I am fearful that mandating labels, before the student has taken that very first step toward interpretation, risks making the true power and wonder of the text, of the author’s creation, harder to access, harder to know.

It is ultimately the responsibility of students and instructors to pick apart texts: to analyze, criticize, and place in context the author’s ideas, subjects, and words. One issue, alluded to above, is that trigger warnings take that work out of the students’ hands and present the answer to them in the syllabus: this work is racist, classist, and sexist. It comes pre-judged.

Another issue, though, is that analysis and criticism should follow some sincere effort to inhabit the text and to understand the author’s world and concerns. A text should be approached, in other words, as if it might teach you something. It may be uncomfortable to approach a text in that way: exposing ourselves to different ways of thinking usually is. But we may do no good service to marginalized students by impeding this act of empathy. Fewer may be surprised by a textual “microaggression,” but by always beginning with their grievance placed between themselves and the text, will fewer grab hold of the texts and take ownership of them, fully participating in the texts that constitute their cultural capital too? Will fewer be able to say, with Richard Rodriguez, "I have taken Caliban's advice. I have stolen their books"?

I tend to think that we should strengthen marginalized students in other ways than this. The study of gender, race, ethnicity and so on in relation to the Western canon has exploded over the last 30 years; the pioneers of those fields were not silenced by the naked texts, and I would not expect their heirs to be paralyzed either. Leave warnings regarding disturbing content to the discretion of the instructors; support those with PTSD with better mental health services and case-by-case accommodations; increase minority attendance, and minority presence in the faculty and the administration, which will do more to un-silence marginalized communities in the academy than any doctrine of labeling. Absent evidence of benefit, don't impose scarlet letters of thoughtcrime on texts; that will impede good reading, which alone makes the text a part of the student and their story -- an act which is not only vital to education but is, especially for the young, especially the marginalized, a source of power, part of their flowering and their strength.

Monday, August 1, 2016

Rancid wine in cracked bottles: The Tim Ball story

Tim Ball is spinning his wheels at WUWT, pushing this long-discredited, mendacious presentation of Greenland ice core data:

This deception was previously employed by Don Easterbrook:

. . . regarding which, Skeptical Science has already pointed out the rather glaring denier falsehood:
Easterbrook plots the temperature data from the GISP2 core, as archived here. Easterbrook defines “present” as the year 2000. However, the GISP2 “present” follows a common paleoclimate convention and is actually 1950. The first data point in the file is at 95 years BP. This would make 95 years BP 1855 — a full 155 years ago, long before any other global temperature record shows any modern warming. In order to make absolutely sure of my dates, I emailed Richard Alley, and he confirmed that the GISP2 “present” is 1950, and that the most recent temperature in the GISP2 series is therefore 1855. . . .

Unfortunately for Don, the first data point in the temperature series he’s relying on is not from the “top of the core”, it’s from layers dated to 1855. The reason is straightforward enough — it takes decades for snow to consolidate into ice.
Of course both Easterbrook and now Ball are also playing that ever-popular denier game of pretending one regional temperature record can stand as a proxy for global temperatures. But what really touches the nonsense into immortality is treating 1855 temperatures as modern-day temperatures, and denying global warming on that basis.
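The arithmetic behind the Skeptical Science point is worth making explicit. Here is a trivial sketch of the BP ("before present") convention; the helper function is mine, purely for illustration:

```python
# Paleoclimate "BP" (before present) dates conventionally count back
# from 1950, so a sample at 95 years BP is from 1855, not from today.
PALEO_PRESENT = 1950

def bp_to_calendar_year(years_bp):
    """Convert years-before-present to a calendar year (CE)."""
    return PALEO_PRESENT - years_bp

print(bp_to_calendar_year(95))  # 1855 -- the youngest point in the GISP2 series
```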

Have the razor-sharp critical intellects at WUWT picked up on this rather obvious deception? Not in the first hundred comments, which are mostly deniers arguing about whether warming is caused by a 100,000-year-cycle (rather than CO2) or a 21,000-year-cycle (rather than CO2). Tom Dayton points out the obvious, but "AndyG55" is ready with his unanswerable comeback: Michael Mann is a stupidhead LOL:

 Steve McIntyre splits the baby in his own inimitable style, warning deniers away from the data set without stating the obvious point that YOU CAN'T REFUTE GLOBAL WARMING WITH A RECORD THAT STOPS IN 1855.

 Other than that, it's the same old nonsense. Someone introduces the idea of a 41,000-year-cycle, so they chase that tennis ball for a while. Then, in response to some mealy-mouthed word salad by Easterbrook himself, Nick Stokes, who is somehow not banned from the monkey house, owns him and Steve McIntyre in one go:

. . . and that actually shuts up these venerable patriarchs of weaponized ignorance. Bravo, Nick. Bravo.

Thursday, June 2, 2016

We can do 100% renewables. But we probably shouldn't.


Peter Sinclair has a post up taunting "renewable haters" who are invited to be embarrassed that nuclear plants, under pressure from cheap natural gas, may require public money to stay in operation. Following hard on the heels of that, Exelon has announced the shuttering of the Clinton and Quad Cities nuclear plants, 3GW of near-zero carbon energy gone for want of $110 million in subsidy per year (the combined losses of the two plants in the current market).

As an enthusiastic taunter of those I feel deserve it, I know the people Sinclair is talking about: people who position nuclear as the honest, work-a-day, practical solution, whereas renewables are impractical fairy dust, a con sustained by massive public money. Which is ridiculous on all counts: nuclear energy has always required public support, with the government providing most of the R&D, permanent waste disposal at bargain prices (how's that coming, guys?), loan guarantees, even free insurance against the possibility of a meltdown. Meanwhile wind has reached 5% of US electricity production, and sunny countries and regions, such as Jordan, are finding solar energy profitable without subsidies, as prices for modules continue to fall.

But the vices of nuclear advocates should not be confused with the virtues of nuclear energy. And just because we can build a 100% RE grid, does not mean we should.

Looking out into the world today, it is obviously imperative to get human civilization to net zero or net negative GHG emissions as soon as possible. Every year, every month that we don't pushes us further into the heart of a global disaster.

Renewables require careful load-balancing across large areas, storage, and dynamic demand management to begin to approach 100% of the energy supply. Contrariwise, every 1% of baseload power you add makes the intermittent remainder easier to manage and cheaper overall. Science of Doom has a great post on the math here, and it's worth quoting his conclusion at some length:

What is the critical problem? Given that storage is extremely expensive, and given the intermittent nature of renewables with the worst week of low sun and low wind in a given region – how do you actually make it work? Because yes, there is a barrier to making a 100% renewable network operate reliably. It’s not technical, as such, not if you have infinite money.
It should be crystal clear that if you need 500GW of average supply to run the US you can’t just build 500GW of “nameplate” renewable capacity. And you can’t just build 500GW / capacity factor of renewable capacity (e.g. if we required 500GW just from wind we would build something like 1.2-1.5TW due to the 30-40% capacity factor of wind) and just add “affordable storage”.
So, there is no technical barrier to powering the entire US from a renewable grid with lots of storage. Probably $50TR will be enough for the storage. Or forget the storage and just build 10x the nameplate of wind farms and have a transmission grid of 500GW around the entire country. Probably the 5TW of wind farms will only cost $5TR and the redundant transmission grid will only cost $20TR – so that’s only $25TR.
Hopefully, the point is clear. It’s a different story from dispatchable conventional generation. Adding up the possible total energy from wind and solar is step 1 and that’s been done multiple times. The critical item, missing from many papers, is to actually analyze the demand and supply options with respect to a time series and find out what is missing. And find some sensible mix of generation and storage (and transmission, although that was not analyzed in this paper) that matches supply and demand.
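To make the capacity-factor arithmetic in that excerpt concrete, here is a minimal sketch; the numbers are the illustrative ones from the quote, not a serious grid study:

```python
# The capacity-factor arithmetic from the Science of Doom excerpt.
# Illustrative numbers only; a real study would use hourly time series.
avg_demand_gw = 500          # average US supply requirement, GW
wind_capacity_factor = 0.33  # wind typically runs at ~30-40% of nameplate

# Nameplate wind needed just to match *average* demand
nameplate_gw = avg_demand_gw / wind_capacity_factor
print(f"nameplate wind required: {nameplate_gw:,.0f} GW")  # ~1,500 GW, i.e. ~1.5 TW

# Even this says nothing about the worst week of low sun and low wind,
# which is what the storage or redundant transmission must cover.
```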

What's more, baseload renewable sources such as geothermal, hydroelectric dams, and tidal power, all require large areas with appropriate geography (and geology) to be successful. Geothermal and tidal power are starting from an extremely small base, while hydroelectric dams (which have significant environmental costs of their own) are already close to their saturation point.

Consider the Exelon plants, Clinton and Quad Cities. Their combined capacity is about 3GW, which at the industry-standard 0.9 capacity factor comes to roughly 24,000 GWh per year. Those two plants, alone, produce more GWh of electricity than all the geothermal plants in the nation, combined. They produce more clean energy than all the utility solar plants in the nation, combined. That would be a bargain for a tiny subsidy of $100-150 million a year: it comes to about half a cent per kWh. We could subsidize our entire electrical grid to that extent for a small fraction of 1% of GDP.
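The back-of-the-envelope math, for anyone who wants to check it (capacity and subsidy figures as above; everything else is standard arithmetic):

```python
# Back-of-the-envelope check of the Clinton + Quad Cities numbers above.
capacity_gw = 3.0          # combined capacity of the two plants
capacity_factor = 0.9      # industry-standard for US nuclear
subsidy_dollars = 110e6    # annual shortfall cited by Exelon
hours_per_year = 8760

generation_gwh = capacity_gw * capacity_factor * hours_per_year
print(f"annual generation: {generation_gwh:,.0f} GWh")  # ~23,700 GWh

generation_kwh = generation_gwh * 1e6
print(f"subsidy per kWh: ${subsidy_dollars / generation_kwh:.4f}")  # ~$0.005/kWh
```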

Nuclear energy is, by far, the largest source of low-carbon energy in the United States. Doubling or tripling our capacity could be done easily with the political will to do so. At a bare minimum, we should be maintaining the plants we have to the end of their useful life. Subsidies aren't a dirty word here. At least until we have a comprehensive carbon tax, all low-carbon energy will require subsidies or unfunded mandates, including wind and solar, especially once they reach a scale where their fluctuations necessitate storage.

Different countries and regions with different resources, relationships, and geography are going to need different mixes of sources to get to net zero. Ruling out either more RE or more nuclear seems irresponsible to me.

Thursday, May 5, 2016

SteveF makes a hash of climate sensitivity; I propose a solution


Over at the Blackboard they are hawking a “heat balance based empirical estimate of climate sensitivity” which delightfully uses the IPCC's own numbers to show that climate sensitivity has got to be low!!! OMG!!! 
I will show how the IPCC AR5 estimate of human forcing (and its uncertainty) leads to an empirical probability density function for climate sensitivity with relatively “long tail”, but with a most probable and median values near the low end of the IPCC ‘likely range’ of 1.5C to 4.5C per doubling.
And how is Steve going to do that? Well, he's going to give us both barrels of the lukewarmer shotgun, oversimplification and argument from incredulity.

Let's leave aside for today the argument from incredulity (in which his own method produces a fat tail of dangerously high climate sensitivities, and he says, basically, "but that can't be true, because hand-waving") and look at the oversimplification at the heart of SteveF's method for estimating climate sensitivity.
We can translate any net forcing value to a corresponding estimate of effective climate sensitivity via a simple heat balance if:
1) We know how much the average surface temperature has warmed since the start of significant human forcing.
2) We assume how much of that warming has been due to human forcing (as opposed to natural variation).
3) We know how much of the net forcing is being currently accumulated on Earth as added heat.
The Effective Sensitivity, in degrees per watt per sq meter, is given by:
ES = ΔT/(F – A)      (eq. 1)
That is fairly close to true, leaving aside changes in the albedo of the earth over time, and problems with taking the average temperature, which I discuss below. The chief problem is that he doesn't know any of those things with sufficient accuracy to constrain climate sensitivity in any meaningful way.
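To see the machinery of eq. 1 at work, here is the arithmetic with illustrative round numbers; these are AR5-flavored assumptions of mine, not SteveF's actual inputs:

```python
# Eq. 1 with illustrative round numbers (my AR5-flavored assumptions,
# not SteveF's actual inputs).
F2X = 3.7  # forcing from a doubling of CO2, W/m^2

def effective_sensitivity(dT, F, A):
    """ES in K per W/m^2: warming / (net forcing - heat accumulation)."""
    return dT / (F - A)

dT = 0.9  # surface warming since preindustrial, K
F = 2.3   # net anthropogenic forcing, W/m^2
A = 0.6   # heat currently accumulating in the system, W/m^2

es = effective_sensitivity(dT, F, A)
print(f"ES = {es:.2f} K/(W/m^2) -> {es * F2X:.1f} K per doubling")

# The fat tail in one line: shave F to 1.7 W/m^2, well within the AR5
# uncertainty, and the same arithmetic gives ~3 K per doubling.
print(f"with F = 1.7: {effective_sensitivity(dT, 1.7, A) * F2X:.1f} K per doubling")
```

Small shifts in poorly known inputs swing the answer across most of the IPCC likely range, which is the point of the paragraphs that follow.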

Climate scientists who do actual work with the climate are doing a fine job of reducing the uncertainty of these numbers, but in every case the opening move is to average available measurements over a period of time. A heat balance model, by contrast, requires by definition an accurate accounting of how much heat is coming in and how much is going out, and those numbers change over time.

How much has the average surface temperature warmed since the start of significant anthropogenic forcing? Right now, the answer is about 1.5C (not 0.9C, as Steve estimates). El Nino will subside and that number will (temporarily) fall, but that is beside the point: if you are using present-day forcings, then you have to use present-day temperatures.

The same goes for ocean heat uptake: if you are going to compare that to surface temperatures, you need to know what the uptake was at the moment when you took values for the forcings and the total warming. That's tricky, because we know ocean heat uptake varies significantly over time. It's lower than usual right now, because of El Nino, but how low?

If you are going to use ocean heat uptake averaged over X number of years (which to my understanding is basically mandatory to get any kind of an accurate number), then you also have to average the net forcing over those same years, and you also have to average the warming compared to preindustrial over those years.

But simple averaging still will not work, because heat loss varies with temperature to the fourth power. An average warming of +0.9C could reflect a steadily linear increase, or a long period of flat temperatures followed by a prolonged spike in temps to +4C. The latter will radiate more heat into space than the former. The average temperature is the same, but the heat balance is not the same. So we had better stick to instants of time if this method is to have any hope of delivering accurate results.
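A toy demonstration of the point, with made-up numbers: two warming paths with the same average temperature do not radiate the same amount of heat.

```python
# Two temperature paths with the same mean temperature radiate different
# amounts of heat, because emission goes as T^4 (Stefan-Boltzmann).
# All numbers are toys.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4
BASE = 288.0     # rough global mean surface temperature, K

# Path 1: steady linear warming from +0 C to +1.8 C (mean anomaly +0.9 C)
ramp = [BASE + 1.8 * i / 99 for i in range(100)]
# Path 2: flat, then a late spike to +4.5 C (20 * 4.5 / 100 = same +0.9 C mean)
spike = [BASE] * 80 + [BASE + 4.5] * 20

for name, path in [("linear ramp", ramp), ("flat + spike", spike)]:
    mean_temp = sum(path) / len(path)
    mean_emission = sum(SIGMA * t**4 for t in path) / len(path)
    print(f"{name}: mean T = {mean_temp:.2f} K, mean emission = {mean_emission:.3f} W/m^2")

# Same mean temperature; the spiky path emits measurably more heat.
# Averaging temperature first throws that information away.
```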

To reiterate: for an estimate of heat balance to give you climate sensitivity, you need the heat balance AT THAT MOMENT, not averaged over time. The amount of heat the oceans were absorbing ten or twenty years ago can only be compared to the temperature ten or twenty years ago, and the net forcing of ten or twenty years ago. If you are comparing the average heat uptake by the oceans since 1993 with the last 13 years of temperatures and forcing estimates from a moment in 2010, you are comparing apples and oranges. All of these things change over time, and to use the relationship between them to estimate climate sensitivity, only contemporaneous estimates can hope to be valid.

Consider a building occupied by an unknown number of people. You want to know how many people are inside. If you know how many people were in the building at midnight Tuesday, how many entered on Wednesday, and how many left on Wednesday, you know how many are in the building when Thursday dawns (assuming no births or deaths, nerds).

On the other hand, knowing how many people started in the building, the number who left the building on Sunday, and the average number of people entering the building each day over the last year, doesn’t help you a hell of a lot. But this is what Steve has tried to do with his “heat balance based empirical estimate of climate sensitivity.”

For this to work, do you need to use the present moment? No, you do not. In fact, there may be excellent reasons to use some instant in the past, such as the benefit of hindsight in estimating warming or forcings or the presence of an exceptional event (like a large volcanic eruption) that lets you really test some of your theories about forcings and heat balance and temperature.

In other words, you need a series of “moments,” for which you estimate the levels of various forcings and the heat uptake of the oceans. Then you can predict what the temperature “should” be, based upon the inputs, and compare it to what the temperature was (and is.)

Since resolving the state of the climate in each of these moments is itself a fairly powerful method of determining the inputs for the next “moment” (and whether it can do so, compared against historical observations, is a good test of how accurate it is), we might want to calculate a series of moments, each derived from the one before, based upon our best estimates of forcings, ocean heat uptake, and the like.

Since everything we said about average temperatures over time could also be said about temperatures averaged over the global surface (i.e., that different local temperatures can yield the same average, but different heat loss), we had better break the earth into boxes, or "cells," and calculate temperature, heat uptake, heat loss, etc., for each cell.

Not all the same color
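What does the "series of moments" look like in its simplest possible form? Something like the sketch below: one cell only for brevity, with every number an illustrative assumption of mine.

```python
# A zero-dimensional toy of the "series of moments" idea: step the
# heat balance forward in time instead of averaging over it. One cell
# only, and every number here is an illustrative assumption.
SECONDS_PER_YEAR = 3.15e7
HEAT_CAPACITY = 4.2e8   # ~100 m ocean mixed layer, J/m^2/K (assumed)
LAMBDA = 1.9            # climate feedback parameter, W/m^2/K (assumed)

def forcing(year):
    """Toy forcing ramp: zero in 1900, rising to ~2.3 W/m^2 by 2010."""
    return max(0.0, 2.3 * (year - 1900) / 110)

T = 0.0  # warming relative to preindustrial, K
for year in range(1900, 2011):
    imbalance = forcing(year) - LAMBDA * T             # heat uptake at this moment, W/m^2
    T += imbalance * SECONDS_PER_YEAR / HEAT_CAPACITY  # one-year Euler step

print(f"toy warming by 2010: {T:.2f} K")
# A real version tracks many cells, more forcings, and deep-ocean uptake.
```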

With these modest adjustments the calculations will have a much better chance of doing what Steve wants them to do, which is to take measurements of the climate system, account for ocean heat storage, and estimate climate sensitivity.

I call it a climate model. Trademark pending.

Wednesday, April 27, 2016

Punch-drunk with warming, WUWT branches out to being wrong about Palestine

2014 was the warmest year on record. Until 2015. 2015 looks to hold the title until 2016 goes in the books. Massive coral bleaching, accelerated sea level rise. To paraphrase Cedric Coleman, it's hard out there for a denier.

Eric Worrall of the monkey house tries to paper over the gaping hole in the movement's foundation with a "jazz hands" routine about Mahmoud Abbas, who took the opportunity presented by a UN signing ceremony for a climate accord to call out Israeli settlements in the occupied territories:
"The Israeli occupation is destroying the climate in Palestine and the Israeli settlements are destroying nature in Palestine," Abbas told the gathering of 175 countries signing a landmark climate deal.
Eric chooses to pretend that Abbas claimed Israeli settlements cause global warming:
 Regardless of your position on the Israel / Palestine situation, suggesting that Israeli settlements contribute significantly to global warming is utterly implausible.
Which of course is probably why he didn't say that. He said the occupation was damaging the environment in Palestine, which it most certainly is. Israelis have established over 200 illegal settlements in the West Bank alone. These communities site themselves to control resources and strategic points and to restrict the development of Palestinian communities. They are not established via any rational planning process, which would certainly not scatter hundreds of tiny towns, some just campers and trailers running off diesel generators, over thousands of square kilometers. This is indeed destructive to the environment in the West Bank.

Saturday, March 12, 2016

Lucia's sadly selective statistical showpersonship

Lucia Liljegren is spending the twilight of lukewarmism as a non-laughable position mostly posting recipes and notifying her followers about the arrival of major holidays (three of her last ten posts). But 'twas not always thus. During the "hiatus" Lucia was fond of comparing the IPCC's multi-model mean with global temperatures, despite the fact that these were models of climate, not weather forecasts, and that patient people had explained to her over and over that "about 0.2C/decade over the next several decades" was not a prediction one could falsify based on a few years of data.

She liked making this mistake so much, she did it again and again and again (and again and again…and again.) But this all stopped rather abruptly in November of 2014, which funnily enough was exactly the time El Nino came to a stuttering start after an unprecedented 50-month absence:

As a result, every month after November 2014 (anomaly 0.68C) has been hotter (anomaly-wise) than November 2014 itself:

And yet despite the rather dramatic turn in the data, and despite the fact that she liked making this comparison well enough to make it over and over again with different temperature records and updates to the present, she never updated her final graph, which looked like this:

Why did Dr Liljegren suddenly lose interest in this exercise? Why the statistical-torture hiatus? We may never know, but said graph with more recent GISTEMP measurements superimposed looks like this:

It's a mystery, really.

UPDATE: MartinM has better graph-fu than I and has updated lucia's graph of the multi-model mean vs the 13-month mean.
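For anyone who wants to repeat the check, the recipe is simple. A sketch, assuming you have GISTEMP monthly anomalies in a local CSV; the filename and column names here are hypothetical:

```python
# Reproducing the 13-month-mean overlay: load GISTEMP monthly anomalies
# and smooth them the way the graphs above did. The CSV filename and
# column names are placeholders for wherever you keep the data.
import pandas as pd

gistemp = pd.read_csv("gistemp_monthly.csv", parse_dates=["date"])
gistemp = gistemp.sort_values("date").set_index("date")

# 13-month centered rolling mean -- the smoothing used in the graphs above
gistemp["smoothed"] = gistemp["anomaly"].rolling(window=13, center=True).mean()

print(gistemp[["anomaly", "smoothed"]].tail(12))
```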


Monday, January 4, 2016

Announcement of new elements allows WUWT dittoheads to flex their ignorance

Every now and again, Anthony likes to cut-n-paste a science press release, to try and delude himself and his readers into thinking his website has something to do with science other than whining about it. Unfortunately, changing the subject merely serves to illustrate the pathetic science illiteracy and reactionary politics of his little band.

Recently actual scientists were able to synthesize four new elements -- creating materials never seen on earth before. And Anthony, "citizen scientist," was able to copy their press release!

This should be easy. Not much is required of the commenters by way of response here. Scientists toiled for years in obscurity and today they leave their mark on history. Yay science! Unfortunately, this simple PR exercise is beyond the ken of Tony's tinfoil hat brigade.

Needless to say, "just get past…around 110" is not anyone's idea of how to synthesize stable superheavy elements, which is still very much a thing. But perhaps I am getting sidetracked from "JPS"'s main point, which seems to be: Scientists were wrong about an island of stability (wrong) so global warming is a lie!

Multiple dim bulbs simply reject the idea that anything has been discovered. It's just another hoax!

"Mark" one goes off on a weird tangent about element names, but "Mark" two stands ready to pull the discussion back to what the site is all about:

Having belittled homosexuals, the tinfoil hats decide it's time to bring up slavery and the New World Order:

I'd like to reiterate: this is a puff piece about a feel-good story about the discovery of new elements. But these deniers can't get through a simple press release on a totally non-climate-related subject without devolving into an anti-intellectual, homophobic, paranoid cri de coeur. It does not make one hopeful for their output of the course of the rest of this, an election year.