Part 1: Compare and contrast
“David Cameron has launched his most audacious bid yet to capture Labour's political ground, claiming that the Tories are the true party of ‘working people’ in Britain. …Cameron says: ‘We must show that, unlike Labour, we will be a party that is for working people, not rich and powerful vested interests.’”
“David Cameron has raised more than £100,000 from a private club which offers fundraising lunches at the House of Commons. Oxfordshire residents are charged £480 a year for membership of the club, of which the main perk is two private lunches in the parliamentary dining rooms. … Sir Philip Mawer, the parliamentary standards commissioner, has launched a formal investigation into the Conservative dining clubs. It is against parliamentary rules to use the dining rooms for fundraising.”
Well, that’s a bit rich.
Part 2: ‘Negative, moi? Non!’
Cameron also accuses Labour of “incompetence” and “untrustworthiness” and generating “disgust”. He warns that in 2007, “Labour’s dark side” will come to the fore, in the shape of Gordon Brown.
He adds, speaking through a mouthful of unmelting butter: “we need to prepare ourselves for an onslaught of negative campaigning”.
(Cameron has previously said that Brown is “weak”, “tragic” and “laughable”, and that “I’m fed up with the Punch and Judy politics of Westminster, the name calling, backbiting, point scoring, finger pointing.”)
I do hope I’m not being too negative in suggesting that Cameron’s a hypocrite. That would be simply awful.
Sunday, December 31, 2006
Friday, December 29, 2006
2006: year of the Cameron?
David Cameron is often described as the Conservative Tony Blair: a young, charismatic leader of the opposition; very media-friendly but thin on policy detail; moving his party towards the centre to attract new voters; disgruntling some of the party’s traditional supporters in the process; but succeeding in getting better opinion poll ratings than his predecessors. Blair swept his party to unstoppable victory.
The analogy works to an extent. But all such analogies break down when you look closely enough.
Let’s start with popularity. It’s hard to compare Cameron’s first year as leader with Blair’s first year, because the pollsters have changed their methodologies since then to reduce their pro-Labour bias. What we can, tentatively, do is look at the changes in the parties’ poll ratings that occurred during each leader’s first year.
Blair became Labour leader in July 1994; Cameron became Tory leader in December 2005. Using voting intention data available at UK Polling Report, I’ve calculated baseline figures for each party’s vote in the three months before the new leader took charge (Mar-Jun 94 and Sep-Nov 05). I then worked out average poll ratings over the following three months and then each succeeding quarter (Aug-Oct 94, Nov 94-Jan 95, Feb-Apr 95, May-Jul 95; and Jan-Mar 06, Apr-Jun 06, Jul-Sep 06, Oct-Dec 06).
I’m averaging over quarterly periods to smooth down the meaningless month-to-month changes that come as a result of sampling error. As I said, pollsters in the mid-1990s all exaggerated Labour’s support, most to ludicrous levels. But ICM distinguished itself by adjusting its figures to take account of ‘shy Tories’. Its Labour leads were consistently the smallest and, come the 1997 election, the most accurate. So, for 1994-95, I’ll only look at the ICM figures.
Nowadays, there is adjustment aplenty. As Mike Smithson* of PoliticalBetting.com notes, ICM, YouGov and Populus all weight their results to deal with sample bias. So for contemporary figures, I’ve used these three pollsters.
Here are the quarter-to-quarter percentage point changes in voting intention share, as well as the overall change from the baseline to the fourth quarter:
Labour 1994-95
ICM
Base-Q1 +3.2, Q1-Q2 +1.0, Q2-Q3 +2.0, Q3-Q4 +0.8
Base-Q4 +7.0
Conservatives 2006
ICM
Base-Q1 +3.5, Q1-Q2 0, Q2-Q3 +1.6, Q3-Q4 +1.0
Base-Q4 +6.1
YouGov
Base-Q1 +4.9, Q1-Q2 -0.8, Q2-Q3 +1.0, Q3-Q4 -0.4
Base-Q4 +4.7
Populus
Base-Q1 +3.7, Q1-Q2 +0.3, Q2-Q3 -0.3, Q3-Q4 -0.7
Base-Q4 +3.0
Average across pollsters
Base-Q1 +4.0, Q1-Q2 -0.2, Q2-Q3 +0.8, Q3-Q4 0
Base-Q4 +4.6
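(If you fancy repeating the sums with your own figures, the arithmetic is nothing fancy – here’s a rough Python sketch, using made-up vote shares rather than the actual UK Polling Report numbers:)

```python
# A rough sketch of the averaging above – illustrative numbers only,
# not the actual UK Polling Report data.

def quarterly_changes(monthly_shares, baseline_months=3, quarters=4):
    """Average the readings before the leadership change into a baseline,
    then average each subsequent quarter and return the quarter-to-quarter
    point changes plus the overall baseline-to-Q4 change."""
    baseline = sum(monthly_shares[:baseline_months]) / baseline_months
    rest = monthly_shares[baseline_months:]
    quarter_means = [sum(rest[3 * q:3 * q + 3]) / 3 for q in range(quarters)]
    steps = [round(quarter_means[0] - baseline, 1)]
    steps += [round(b - a, 1) for a, b in zip(quarter_means, quarter_means[1:])]
    return steps, round(quarter_means[-1] - baseline, 1)

# Three hypothetical baseline months, then twelve months under the new leader.
shares = [31.0, 32.0, 33.0,
          35.5, 36.0, 35.5,  35.0, 36.0, 36.5,
          37.0, 37.5, 37.0,  38.0, 38.5, 38.0]
print(quarterly_changes(shares))  # -> ([3.7, 0.2, 1.3, 1.0], 6.2)
```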
Two things stand out: first, each leader started strongly but then lost momentum as the year went on; second, while Cameron made more of an initial leap than Blair, his loss of momentum has also been greater – he ended his first year with less overall progress than Blair made.
As I said, Cameron is often seen as the new Blair. But these days, people are pretty sick of Blair: they may think back to how persuasive he seemed in the early days, and reflect that the flipside of presentational brilliance may be untrustworthiness (as they see it). They may be keen to not be fooled again, and may fear that Cameron could turn out much the same. This may be why he has lost momentum sooner than Blair did. Two of the three pollsters above show the Tories actually losing ground in recent months. (For the record, the Blair Q5 figures show a drop back.)
But there are other dissimilarities.
When Blair became leader, Labour was already well ahead. When Cameron became leader, the Tories were moderately behind. As I’ve said, the historical and contemporary ratings can’t be directly compared, but the huge difference is at least indicative: the pre-Blair baseline was a Labour lead of 15.8% (ICM); the pre-Cameron baseline had the Conservatives trailing by between 5.5% (ICM) and 6.7% (Populus).
This means that the Tories have not only made less progress than Labour did under their new leader, but also that they need, given their poorer starting position, to make much more progress. If Cameron is indeed at or near his peak, he’s in trouble.
And there may be more trouble afoot.
Blair did things that often disconcerted and worried the Labour core vote. But the fact that he was pulling in such huge poll leads and council election landslides provided reassurance, and so he was able to make his modernising centrist pitch without losing the left. Cameron is trying to pull a similar trick from the right. But his party’s poll leads are only slim to moderate, and the Tory grass roots are increasingly rumbling with discontent; these two facts are related.
Blair’s strong starting position – and his extremely strong early results – gave him permission to do things that sometimes offended the party loyalists. He gave every impression of becoming a big electoral winner, and so the Labour grass roots forgave him a lot.
The ‘Cameron effect’ – in light of his poor baseline and limited momentum – has so far amounted to only modest leads. He has not created anything like the sort of popular approval comfort zone that Blair had. Because of this, the Tory core vote is far less forgiving of his modernising pitch towards the centre. This means that the coalition Cameron needs to forge is going to be harder to hold together.
Readers should be careful to avoid falling off their chairs laughing, as I am about to use the words ‘UKIP’ and ‘credibility’ in the same sentence.
Blair did face an electoral challenge from the left: Arthur Scargill splintered off to form the Socialist Labour Party. It bombed. Cameron’s right flank is far more exposed, notably to UKIP. That party was established over a decade ago; it has a troop of MEPs, small but noticeable poll ratings, and general election votes high enough to frustrate the Tories in marginals. UKIP has more credibility with its target voters than the SLP ever did with its. So Cameron has much less scope to take the Tory right for granted.
One final difference between the Tories’ situation today and Labour’s in the mid-1990s concerns the policy areas each needed to address.
The areas where Labour most needed to improve its standing – crime and the economy – also happened to be the issues of most general importance to the electorate. If people think you can’t handle these, then popular health and education policies won’t get you very far. As a result, Blair and Brown’s efforts to talk up toughness on crime and economic stability were useful for two reasons: strengthening weak spots and boosting Labour’s perceived relevance.
This dynamic was greatly aided by the fact that the Major Government’s record on traditional Labour issues such as health and education was widely held in contempt. The Tories posed no threat in these areas, and so Labour’s relative downplaying of these themes didn’t really matter. The Tories had also helped Labour’s strategy enormously by shredding their own reputation for economic competence. All of these factors came together to enable and reward Labour’s repositioning.
Cameron doesn’t have this advantageous dovetailing; for him, the rebranding priority is to shed the ‘nasty party’ image. To do this, he has been focusing on ‘caring’ issues such as poverty and the environment to the relative neglect of crime and the economy. As a result, he risks making the Tories seem more peripheral on the big issues – on which Labour has a more convincing grip today than did the Tories around 1994-95.
To recap: Cameron started out needing to make more progress than Blair did; he has so far made less progress than Blair did, and is showing signs of stalling sooner; he has less electoral room for manoeuvre between his right and the centre; and his repositioning strategy focuses on areas likely to have less payoff.
2006 was the year of the Cameron. But despite his accomplished showmanship and amiable manner, there are good grounds for doubting that 2007, 2008 and 2009 will also be.
The Labour defeatists who are already half-wishfully writing off the next election are not just self-indulgent: they’re mistaken. Headline-wise, 2006 has been a horrendous year for Labour – but if people were truly sick of this Government, then the polls would be horrendous too.
* Mike also carried out a similar Blair vs Cameron exercise recently. His differs from mine in: (a) beating me to it; (b) missing out – as a result of (a) – on the latest polls; (c) looking at MORI and ICM polls from both periods; (d) looking at monthly rather than quarterly variations; and (e) knowing how to do nice graphs.
Thursday, December 28, 2006
A good double-act needs good timing
There’s an interview in today’s Times with Labour deputy leader wannabe Alan Johnson. Talk turns to the prospect of Gordon Brown being unopposed for the position of leader:
“Some Labour figures worry that the Chancellor would be hampered by having to wait around while the party picks his deputy from a big field. But Mr Johnson said that Mr Brown could assume the leadership after the result of the deputy contest.”
This doesn’t convince me. Brown is currently handicapped by the need to wait out Tony Blair’s final months, unable to grab Labour’s agenda and drive it the way he wants to go – and all the while the Tories are concentrating their fire on him rather than on the lame duck in No. 10.
This would be even worse in the context of a leadership coronation at the same time as an election for the deputy post. Any lingering doubt about Brown’s succession would have been dispelled once nominations close, and for him to have to sit out, say, two months as leader-elect while the deputy hustings go up and down the country would rob him of momentum terribly at a crucial time.
The alternative would be for Brown to become leader immediately on close of nominations. This would have the virtue of avoiding a period of dead time leadership-wise, but would involve a different downside. He would be able to make all sorts of policy announcements as desired, but his ability to carry out a full government reshuffle would be sapped by the undecided deputy contest. Also, he would lose control of a key part of the agenda - namely, how deputy votes are cast and (probably more importantly) how they are sought. The election for the deputy won’t be meaningless, and if the leadership is already decided, a lot of media attention will shift there.
A number of key cabinet ministers, plus other runners, will be making speeches about their own vision for the party, and this will almost inevitably detract from whatever Brown is doing in the meantime. There will be a temptation to see whatever they say – publicly to the wider membership and behind closed (yet leaky) doors to their parliamentary colleagues – as a contrast or even a challenge to Brown.
And there’s another problem. However the timing might work, it would be awkward for the deputy to have a popular party mandate while the leader lacked one. It’s clear that Brown would win any contested leadership election, but that abstract counterfactual won’t cut much ice.
True, a mandate from the members didn’t help Iain Duncan Smith; the lack of one didn’t harm Michael Howard. But the fact of an elected deputy – even one willing to be personally loyal – might undermine the position of an unopposed leader.
So Brown – and the party as a whole – could well benefit from a leadership contest. But it remains to be seen whether the left can unite to nominate a kamikaze candidate.
Friday, December 22, 2006
Just a fit of bun
Welcome to... the Freemania Christmas Cryptic Spoonerism Bafflement Challenge!
The Oxford clergyman William Archibald Spooner was prone to make a certain sort of verbal slip: getting the starts of words mixed up. He once told a congregation to “come into the arms of the shoving leopard”. And, to an errant student: “You have hissed my mystery lecture. In fact, you have tasted the entire worm!”
Ho ho ho. Your challenge, should you have nothing better to do than accept it, is to solve the spoonerisms below. Each one has a crossword-type clue for both the spoonerism and the original phrase.
Example: Rooney fleeced by retiring bowler
Solution: Wayne shorn – Shane Warne
Over to you…
(1) Computing firm triumphed – congratulations!
(2) Pastry falls apart, so get some cereal
(3) A pale circle of Tories
(4) Acquired some cows after a legal fight
(5) Crazy placards display rudeness
(6) A sweet line on low furniture
(7) Avoiding marsupials? You’ll need these afoot
(8) Soviet Supreme gives laid-back prescription
(9) Smut! Exaggerate the sailor’s dance
(10) Criminals and goats in hidden corners
(11) Father gambled, but couldn’t pay up
(12) Beseech God to lead us to the queer march
(13) Free cheese in martial arts movies
(14) Offensive style of all-you-can-eat
(15) Young Simpson – warmth keeps him alive
(16) Superlative monkey-spank in Palestine
(17) Enhanced security for the Royal Mail
(18) Flailed in the tabloid of shameless nationalism
(19) Lady’s DNA rebelled without a cause
(20) Baking speed heats – but hold-ups for stockbrokers
Health warning: spending too long trying to think of spoonerisms may lead you to mentally swap round every phrase you come across. Even as I typed that last sentence, I was thinking ‘Health warning… wealth horning? Is horning a word?’ This can cause mental anguish and impaired proof-reading ability. (Roof-preeding? Er, no.)
Answers in a few days. Merry Christmas!
[Update: answers here.]
Thursday, December 21, 2006
This week…
I will mostly be marking the guesstimated birthday of some Middle Eastern bloke who apparently said a bunch of stuff that I don’t believe, and I will do so by means of consumption and idleness.
(Actually, I mark a lot of things with consumption and idleness. Like weekends. Or evenings.)
But work is now over until the new year, and thank Christ for that.
Tomorrow I’ll post a special festive game for any of you bored and desperate enough to care.
Turkmenistan leaderless
The weirdest dictator in the world has died. President Saparmurat Niyazov of Turkmenistan had a heart attack yesterday.
“Mr Niyazov established a cult of personality in which he was styled as Turkmenbashi, or Leader of all Turkmens. He renamed months and days in the calendar after himself and his family, and ordered statues of himself to be erected throughout the desert nation. Cities, an airport and a meteorite were given his name.”
More pressingly:
“According to Turkmen law, the president is succeeded by the head of the legislative body, the People's Assembly. But this post was held by Mr Niyazov himself.”
I’m particularly drawn to this story, as a couple of my friends have just this week got back from a holiday in Turkmenistan. It was quite a challenge for them to get two of the very rare visas issued to Western tourists, but they really wanted to check out this strange place while they had the chance (actually, J was the keen one; K had reservations about the whole ‘impoverished isolated tyranny’ aspect).
Their timing is eerie.
Wednesday, December 20, 2006
Security, cohesion and globalising identity
Pauline Neville-Jones, of the Tories’ Security Policy Group, has put out an interesting paper. I was struck by this bit:
“The fact that the [7/7] bombers were born in Britain shocked us into realising the connection between security and community cohesion. The fact that the bombers were radicalised in part by events outside the United Kingdom forced us to recognise that foreign affairs have become domestic affairs. It is no longer possible to look at domestic security policy and foreign policy separately from each other.”
This says more or less the same thing, and makes the same mistake, as a report from the thinktank Demos earlier this month:
“terrorism is a social and political phenomenon that needs local roots to take hold. The international network and the concept of the ‘umma’ – the global community of which every Muslim is a part – are important features of al Qaida, but distant and global concerns can gain currency only when they are able to feed off local, everyday, personal grievances, such as those experienced by Muslims in the UK.”
…
“While factors such as foreign policy and the Middle East are important, they will have no traction unless they can be linked to sources of grievance and anger closer to home, such as the poverty and discrimination suffered by the Muslim community in the UK.”
The Demos report’s very interesting, with a lot of information and non-hysterical discussion (which, in this context, is nice). But I think part of the analysis goes awry.
The writers are dead right (as is Neville-Jones) that foreign policy and domestic policy are interacting and conceptually blurring far more than we are used to, in large part because immigration means that more of the UK public identify and connect with countries of origin/ancestry:
“With the advent of satellite television, cheap air fares and the 24/7 global media, even poor and deprived communities are able to maintain very close relationships with their home countries in ways that were not possible even a decade ago. This makes questions of loyalty and identity increasingly complex and means that influence and power can lie far from ‘home’ and beyond the control of national politicians in the UK.”
They also take pains to distinguish, within British Muslims, between angry political radicalism (quite common) and violent extremism of the al-Qaeda sort (very rare). This is an important and sometimes neglected distinction.
These two excellent points, though, undermine their argument for their central thesis (apparently shared by Neville-Jones), which is that issues of social cohesion, poverty and discrimination as experienced by Muslims in the UK are key to motivating terrorism.
First of all, their distinction between angry radicalism and violent extremism weakens their case for the link between local conditions and extremism.
Deprivation and discrimination in day-to-day life are certainly involved in motivating non-terrorist radicalism – think of the northern ‘race riots’ in the summer of 2001. Back in those days the media talked of ‘Asians’ rather than ‘Muslims’, and foreign policy as an issue of identity-based grievance was barely on the radar. Rioting is obviously violence rather than legitimate protest, but it is comprehensible as a more extreme version of the ordinary politics of the dispossessed; it’s more intelligible as the occasional extremity of radicalism than as some sort of ‘community terrorism’.
But, as the Demos writers correctly note, most angry, radical Muslims (whether they stop at placards and chants or allow their protests to get rough) are not drawn into the extreme of terrorism. And indeed, those very few that have gone that way (think of Mohammed Siddique Khan) have not, relative to British Muslims overall, been particularly poor, ill-educated or separated from non-Muslim society.
Secondly, the writers (again correctly) observe the increasingly quick and easy exchange of news and opinions between Muslims in the UK and those in families abroad. But this undermines the importance they attach to the link between awareness of injustice across the umma and the poverty and discrimination that Muslims experience locally. If the identity that terrorist recruiters play on is a global Islamic one, then injustice anywhere will do. And the easier it is to get information from across the world – from Pakistan to Gaza – the less motivationally important local conditions become.
Social cohesion policy is largely beside the point as counterterrorism. I’m going to do something out of character now (and which may earn me a kneecapping from the fiskers) and approvingly quote Madeleine Bunting on this:
“It is crucial to delink terrorism from the integration and diversity agenda. They have nothing to do with each other, so nail the myth… that integration is an anti-terrorism strategy. The least integrated are isolated, non-English-speaking mothers and grandmothers - hardly bomb-making material. Conversely, integration measured in education, employment or social life is no immunisation from the appeal of Islamist extremism - as the CVs of last year's London bombers showed.
“So go back to basics and reiterate that integration is about equality of opportunity, breaking down intergenerational cycles of poverty, and harmonious social relations. These goals may - or may not, depending on international affairs - reduce the appeal of terrorism in the long run, but any serious government should be interested in them in their own right, not simply as a means to the end of defeating terrorism.”
There’s also a very practical point to add to this: policies to promote integration could be futile or even counterproductive if they appear to be motivated by a fear that Muslims are potential terrorists. That would look cynically self-interested as well as deeply (and inaccurately) insulting. In this respect, the Demos report is rather unhelpfully subtitled ‘Community-based approaches to counter-terrorism’.
But, all that said, the broad point that foreign and domestic security policy has to be done differently these days is sound. I watched the excellent Thirteen Days last night, about the Cuban missile crisis, with one government carefully calculating its moves towards another. The world’s much less like that now, and not just because of changes to the balance of state power. There are increasing numbers and types of non-state actors on the world stage, and states’ identities become more problematic as their populations change more quickly than their institutions. New realities mean that old-fashioned realpolitik won’t work.
Immigration has made it harder for governments to use raw patriotism to rally support for self-interested foreign policy, and it has contributed to a complex system of transnational relations operating in parallel with the state system. This muddies the water of international relations (as well as the distinction between domestic and foreign), perhaps to the point where there is more mud than water.
Rational calculation based on an assessment of the opponent’s intentions becomes exponentially harder as the players become more diverse, as the boundaries blur, and as the information available becomes less and less adequate. National governments are still the biggest players, by a long way, but they are losing their ability to set – or even fully understand – the rules of the game.
Tuesday, December 19, 2006
‘Vote Blair, get Brown’ (or ‘Opportunism Knocks’)
‘Vote Blair, get Brown’ was the Tory slogan at the last election. It was quite a popular slogan, although not for the Tories. Personally, my attitude was ‘vote Labour, get Labour’, which seemed to work quite well as neither chap had his name on my ballot paper.
But, as David Cameron and his party made the probable succession very explicit (and we’ve all more or less known the likely successor for years), he is really in no position now to say that “it would be right actually to hold a general election as soon as is reasonably possible, because the British people thought they were electing Tony Blair. He’s off. Someone new is coming. They need a mandate.”
His attitude is especially contemptible because in March 2005, Cameron himself said: “The fact is if you vote Labour you get Blair, you get Brown, you get extra spending, extra taxing, extra wasting, extra bureaucracy, more power to Brussels, more regional government - all the things that people don't want. So it doesn't matter whether you have Blair or Brown or Milburn, or whoever.” [my emphasis]
I have no idea whether a snap(ish) election would be better for Brown’s electoral chances than waiting out a full term, but one utterly irrelevant factor is any supposed lack of electoral legitimacy, because however presidential Blair’s style may seem, the fact is that British government is party-based; Parliament is constituency-based. Every MP in the Commons is legitimately there. The Labour majority is legitimate. Labour’s right to choose its own leader is legitimate. The leader of the majority party’s status as prime minister is legitimate.
Next!
Jon Cruddas would sit in the Cabinet as deputy
Jon Cruddas, wannabe deputy Labour leader, took part in a telephone conference with some Labour bloggers last night.
Scrybe asked for questions to put to him. Now, I don’t know much about him, but some of the stuff I’ve heard has made me worry that he might be setting himself up to be some sort of internal lefty dissident, which of course the other parties would love. So I suggested asking:
“You’ve said you don’t want to be Deputy PM nor hold a departmental portfolio. But would you want to sit in the Cabinet as some sort of minister without portfolio, so you can better connect us at the grass roots with what will (hopefully) be a more collegiate government? And would you accept collective responsibility as regards government policy or would you be willing to publicly dissent?”
Now, Scrybe seems to be having some post-champagne issues at present, but Omar, who was also involved, has posted:
“Jon said that he didn't want to be deputy prime minister because he wanted to focus on ensuring the party was involved in policy making - essentially bringing back an elected party chair. However, he indicated that he could be a non-portfolio holder in the cabinet and would therefore accept collective responsibility. He saw his potential role as intervening earlier in the policy making process to ensure that ‘things like top-up fees which were ruled out in the manifesto don't happen’.”
I’m still some way off deciding whom to support, but my worries about Cruddas have just eased.
(Update: Scrybe has now posted a full report.)
Monday, December 18, 2006
‘Norwegian firms create oil giant’
Some headlines give you the strangest mental images.
But no, it’s not a bloodthirsty colossus made of genetically engineered sentient petrochemicals, rampaging through Scandinavia… “and only one man stands between Oslo and destruction…” It’s just a big company.
Yawn.
Sunday, December 17, 2006
Here and there
If anyone other than Matt, Alex and me was following this discussion of god, atheism, morality etc, then you might like to know it’s now passed on to a better place chez Alex.
Friday, December 15, 2006
Dianaspiracy
The truth finally becomes clear.
We can now conclude, with absolute certainty, that Diana was killed by New Labour assassins disguised as paparazzi. You just have to look at the main things that have happened as a result of her death, and ask: cui bono?
She very clearly died so that Tony Blair could: (a) capture the public’s hearts with his command of empathic, Zeitgeist-grabbing soundbites; (b) boost the Oscar chances of Labour supporter Helen Mirren, thus helping an increasingly tired government retain some celeb sparkle; and (c) use the publication of a report on Diana’s death as media cover for his police interview about loans-for-peerages.
He is even more devious than we had imagined.
And when David Cameron gatecrashes the memorial concert planned for next year to showcase his man-of-the-people act, as he surely will (“She would have wanted me to surgically attach this wind turbine to my arse, so that I could generate green energy every time I talked out of it; truly, I am the people’s ponce”), then we’ll know who is the true heir to Blair…
(‘David Cameron’ is an anagram of ‘A damn Di cover’. What more proof do you people need?!?!?)
Wednesday, December 13, 2006
Funding caps and the Labour-union link
If this is true, about the plans of Sir Hayden Phillips, chair of the independent party funding review, then I’m in favour:
“Under Sir Hayden's proposals each of the 3.5 million trade unionists paying the party levy will find their name and address passed to the Labour party where they will be registered as an individual donor; every year the party will have to write to them asking if they wish to remain a donor. A similar registration system will exist for constituency Labour parties where unions give £6 for each 100 members, effectively restricting union influence in these parties where the four unions Amicus, GMB, TGWU and Unison, play a big role. Each union member would be able to personally donate up to £50,000 a year to the party.”
(Although I’m not sure about the requirement to ask for renewal each year – nobody asks me if I want to renew my party membership annually, and that works fine.)
But Luke Akehurst is up in arms:
“If… Blair wants to use the review of party funding to sever the union link with Labour, I am aghast at the short-termism and stupidity. …
“The union link works. It gives a voice in Labour's policy making to millions of ordinary working class voters whose concerns are grounded in the realities and bread and butter issues of the workplace and who counterbalance the esoteric and sometimes extremist views of often middle class individual party members. It means that Labour's leaders are elected by a large, representative sample of those who actually vote for the Party. The only problem with the link is that it needs strengthening at a local level with far more trade unionists being encouraged to both join the Party as individual members and become union delegates to their constituency parties.”
If I’m reading him right, Luke’s concern is for the existence of the union link rather than the funding reform proposals as such. I would hazard a guess that the Guardian report that got his goat was written in line with some idiotic ‘bash-the-unions’ spin from someone in No. 10.
The Daily is, if anything, even angrier: “The bottom line is that if we end the union link - which these proposals would do - then the party is over.”
I don’t think the funding proposals themselves would force the link to be broken; how Labour manages the changes would be a matter for us to decide. And I have no doubt we’d decide not to scrap the link, whatever a few clinically Blairite apparatchiks might think.
If, as I argued a while ago, the affiliation fees that Labour gets from the unions are given in a way that’s clearly individualised rather than flowing through some centrally run union fund, then that helps the argument that these large sums are acceptable in light of a cap: they’re aggregates of small individual donations rather than the result of a union baron with a massive chequebook.
Each union’s voting strength within Labour could still easily enough be allocated based on the number of its members who choose to affiliate and make payments under the union’s umbrella. The voting decisions of union conference delegates could perfectly well be made collectively.
As Luke says, the union link would benefit from becoming more localised and giving individual union members a stronger connection with the party. The new funding arrangement proposed here might even help with that.
I’m very concerned that a big chunk of the Labour party is going to accept the (stupid and pointless) invitation to pick a fight with the leadership over this, giving the overwhelming impression that we’re engaged in special pleading: “Oh no, don’t put a cap on our funding! We’re different! This is the way we’ve always done it!” Which the Tories and Lib Dems will love.
Yes, large union affiliations are different from large corporate donations: they are democratic, the result of large numbers of individual decisions. Why, then, the fear about making that defining characteristic more overt?
Responsibility
Eve Garrard has a good post at Normblog on the subject of blame. She considers, and rejects, the view that the individual agent is always entirely responsible for actions (e.g. terrorist bombings):
“…we don't have to buy into all the anodyne exculpations of some root-causes talk in order to accept that sometimes circumstances make a difference to the degree of responsibility a person carries for her misdeeds. We think that the person who has been hideously abused or oppressed, or appallingly impoverished or indoctrinated, doesn't carry the same weight of responsibility for at least some kinds of wrongdoing”.
She also rejects the view that we can blame the initiator of some chain of effects for all the bad consequences that result, whoever carries these out:
“If we view the initiators of a war as the only blameworthy agents in the context of that war, then not only do we deny responsibility to all the other participants (thus reducing them to moral puppets), we also place a limitless burden of responsibility on the initiators. Indeed this principle gives carte blanche to all the other parties to a military conflict to behave as atrociously as they please, since all the blame will be attributable to the initiators.”
But on top of this, she argues that a third view – under which responsibility can be shared among anyone who has (foreseeably) played a part in producing some negative outcome – also fails:
“A great many people will make some causal contribution or other to the atrocities, since causal chains ramify so fast; and there seems no reason to exclude any of them from sharing in the responsibility, once we allow that it can be shared by more than the direct agents, and especially once we allow that it can be shared by people (such as the initiators of the war) who may have neither wanted nor aimed at the horrific outcomes. We'll rapidly get to the stage where it's easy to say that we're all responsible for the horrors, which is of course tantamount to saying that no-one is really responsible.”
All of which sounds fair enough. So what do we do? She suggests that we might “distinguish between those who are primarily responsible, and those who bear some real, but lesser, secondary, responsibility”. Those directly and intentionally carrying out the action will generally bear primary responsibility, and those who have contributed in some way to its happening will be liable to secondary responsibility.
For want of an elegant segue: here are some of my thoughts.
One way of taking on board the insight that contextual factors matter, while maintaining that there is a unique type of responsibility for what one directly does, is to stipulate that we are discussing actions rather than mere physical behaviour. What I’m getting at here is that actions – things we do in specific situations for particular reasons – are inherently contextual. (One could define behaviour as simple bodily motion – so throwing a grenade would be the same as throwing a ball – but I’m using a somewhat more expansive notion, one that covers the causally relevant immediate surroundings. This probably needs more fleshing out.)
Different intentions and different circumstances can make different actions out of the same basic behaviour. For instance, shooting someone dead could well be an act of revenge, of racial hatred, of accidental clumsiness, of mercenary greed, of self-defence, of incitement to riot, of unthinking panic… Different levels of culpability will attach to these different actions.
The fact that we are often inclined to heap less blame on the shoulders of people who are insane, traumatised, brainwashed or coerced in some way can be covered by this idea: if the action I’m responsible for is assaulting a tourist and smashing their camera while experiencing paranoid delusions, then that is far more forgivable than similar behaviour motivated by a (sane) personal dislike.
On this view, it’s not so much that I have diminished responsibility for my behaviour as that my action (which is defined to encompass my mental state) carries with it a diminished level of blameworthiness.
Blameworthiness is determined by intention and by consequences. Both matter (manslaughter isn’t as bad as murder; attempted murder is bad even if no harm results).
In terms of consequences, the most relevant factor is: which consequences could the agent, at the time of acting, reasonably have predicted as likely? This is another notion that needs more finessing, but something like this formulation seems right. If my seemingly innocuous action produces an utterly shocking disaster, then the action may well be regretted – but if I couldn’t possibly have known, then I can’t fairly be held responsible.
But if my well-intentioned rashness produces a disaster that I could have predicted with a little forethought, that’s different. Negligence may be less reprehensible than malice, but reprehensible it surely is.
The fact that ill intent adds to culpability is obvious. This doesn’t mean, though, that good intentions always protect innocence. The issue of predictable yet unwanted side-effects is complex (see this discussion at Tom H’s for a taster), but if I expect my action to result in some good things and one bad thing, then I can only take the credit for the good if I also accept the blame for the bad.
Tying this back in to my suggestion that we consider actions as inherently contextual, it’s clear that important parts of the context are the agent’s intentions in acting and ability to judge likely consequences.
Relating this to some of what Eve was saying, there are cases when some third party – with intentions and motives very different from my original ones – goes on to commit some atrocity within a context that I have created. It may be that the number of different people’s actions that lie between my own action and some outcome is a good measure of the difference between ‘primary’ and ‘secondary’ (and tertiary…) responsibility, which she uses in her analysis. But the mere presence of some such intervening decisions doesn’t in itself insulate me from some responsibility for the outcome.
Say a lax parole board takes an ill-considered decision to grant early release to a serially violent criminal who shows little sign of remorse or reform. He gets out, and attacks some more people. Obviously, he committed these acts, and he’s responsible for deciding to do them. But, equally obviously, the board members have some responsibility, as they gave him the opportunity to do so.
Or say a terrorist kills some civilians in Britain, accompanied by a video containing angry statements about Iraq and Afghanistan, among other things. It’s his fault; he chose to do it. But, accepting that his motives are what he says they are, if the invasions hadn’t happened, he would probably have been less inclined to set the bomb off. To the extent that such a consequence was foreseeable, the invaders are partly responsible for his increased likelihood of doing it, as is anyone else who has done anything that predictably contributed to getting him into the position he was in.
And the issue of predictability does come in here: the more decisions by different people there are in the chain leading to the outcome, the less scope there is for the original agent to predict the outcome in question. As such, the level of responsibility decreases.
So if I do something that can reasonably be expected to increase someone else’s motive and/or opportunity to do something bad, and they go on to do it, then yes, it’s entirely their fault for doing it, but I also bear some of the responsibility for making it likelier to be done. Given the contextuality of actions, the other person’s action has to be viewed within the context that I predictably contributed to.
One final thought: there may be scope for drawing a distinction between responsibility and blame. With blame, there is an automatic acknowledgement of personal wrongness. Not necessarily so with responsibility: for instance, I’m responsible for the fact that I’m currently sipping coffee rather than tea. I’m the deliberate author of this state of affairs, but there’s no moral evaluative aspect to my decision.
Maybe there are grounds for applying this distinction in cases where one person plays a role in making someone else predictably likelier to do something bad, but the first person (a) doesn’t want the bad outcome to happen, (b) has made their decision as the lesser of two evils, and (c) tries (if possible) to reduce the likelihood of this outcome. In this case, we might agree that there is some responsibility for the bad outcome but that it’s reasonable to exempt their action from blame, given the context.
Monday, December 11, 2006
Things I learned this weekend
That Red Leicester is excellent in a ploughman’s.
That the dark suit/red shirt/short beard combo works pretty well.
That alcohol creates blundering cretins out of sensible sophisticates, however well-turned-out.*
That ordinary plain black thread is much harder to find in the shops than is reasonable.
That sewing buttons onto shirts when hung over is a mug’s game.
That the actor who played Khan in Star Trek II: The Wrath of Khan is not, after all, David Carradine, but Ricardo Montalban – who, on reflection, doesn’t really resemble Carradine that much.
That there’s a song by the Streets that has a very Squeeze sound to it; and, relatedly, that if you say “Is this Squeeze?” when it comes on, the noise of it may cause people to mishear you and be impressed by your knowledge of music; but, that if you labour the point about how, even though you’d never heard the song before, you just knew it was Squeeze, then someone will eventually hear you right and you will be exposed as a prattling, self-satisfied moron who will be going home alone.
That it’s perfectly possible to abandon a pint less than a quarter of the way down, simply because you reckon you’ve already had enough, without immediate ridicule.
That if there’s one thing better than a chocolate brownie, it’s a pack of six chocolate brownies.**
* I did already know this, but it has a habit of slipping my mind at key moments.
** I’ve long suspected this to be true, but the empirical confirmation is priceless.
I hope that some of this knowledge may be used for the general betterment of humanity.
Friday, December 08, 2006
What a difference a day makes
Joel Marks explains, in the latest issue of Philosophy Now, that “in the course of a 365-day year, the earth rotates 366 times”.
As far as I can see, he’s right. All the same, my head is spinning.
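If your head is spinning too, the arithmetic can be checked in a few lines. This is only a back-of-the-envelope sketch of the standard solar-versus-sidereal-day point (the ‘extra’ rotation comes from the orbit itself), not Marks’s own working:

```python
# Back-of-the-envelope check: relative to the stars, the Earth completes one more
# rotation per year than the number of sunrises, because it also goes once round
# the Sun. Figures are approximate.
solar_days_per_year = 365.25
rotations_per_year = solar_days_per_year + 1      # ~366.25 rotations relative to the stars

sidereal_day_hours = 24.0 * solar_days_per_year / rotations_per_year
minutes = (sidereal_day_hours % 1) * 60

print(f"Rotations per year: {rotations_per_year:.2f}")
print(f"One rotation takes about {int(sidereal_day_hours)} h {minutes:.0f} min")  # ~23 h 56 min
```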
Wednesday, December 06, 2006
Kicking the NHS
The Tories are up to their old tricks: “Look! Over there – a bandwagon! Fetch me my jumping shoes!”
This time it’s A&E reorganisation, and the shift towards fewer departments, but larger ones that are better furnished with specialist expertise. People are alarmed about the loss of their local A&Es. As William Hague did before him, David Cameron is exploiting the unthinking appeal of uninformed ‘common sense’. And, for once, he’s being cheered on by the Telegraph (whose love of the NHS is legendary):
“Yesterday, the Prime Minister attempted to persuade people that having to travel further for medical help in emergencies would be good for them. The apparently bizarre logic of this argument rested on the premise that highly specialised ‘super-regional’ (i.e., not local) centres of excellence are the best places to treat such life-threatening events as heart attacks and strokes. … But the common-sense objection to this plan remains: the most sophisticated treatments will be of little use to a patient who has died before he arrives at hospital. …simple logic will make it difficult for most people to reconcile enormous increases in NHS spending with what will seem to be a loss of easily accessible medical help in frightening circumstances.”
If it sounds rum to the ordinary bloke in the street (“I don’t know much about health service management policy, but I know what I don’t like!”) then it must be bad. Never mind what expert opinion says:
“We have to be upfront and tell the public that, in terms of modern medicine, some of the A&E departments that they cherish… cannot and will not be able to provide the degree of specialist services that modern medicine dictates and the public deserves. That means we have to change services so we can deliver safe, high-quality care to everyone who needs it, when they need it.
“Every service cannot be offered by every A&E department - it never has been, and never can be - so it makes sense to create networks of care with regional specialist centres to give the best possible treatment to the sickest people. … Major emergencies affect a relatively small number of people. For most people, care will continue to be as local - or indeed more local - than ever.”
Or: “Campaigns to save services currently provided in district general hospitals could lead to more than 1,000 unnecessary deaths each year, according to new analysis from the Institute for Public Policy Research”.
Opportunism is only to be expected in an opposition party. But hypocrisy at the same time takes a certain gall (or ‘total contempt for the public’ if you prefer). Cameron has made a very big thing about taking power away from centralised politicians and bureaucrats in Whitehall so that the people on the front line can run things as they see fit. But when local NHS Trusts try to reorganise themselves in a way that superficially looks worrying, he junks his supposed principles in order to exploit fear.
Cameron said last month: “The NHS matters too much to be treated like a political football.” He added: “Goooooooaaaall!!!!! One-nil! Wuu-uun-nil!”
Tuesday, December 05, 2006
Nucular deterrence
NO!!! That’s not how you pronounce it!
This word has been in common use for decades. I’ve never once seen it misspelled. But so very many people mispronounce it, and always in the same way.
Look at the word:
NUCLEAR
First, where exactly are you getting this magical extra syllable hidden between the C and the L? Second, what’s the rationale for the sound you’re putting at the end? But no, I’m getting ahead of myself. Let’s begin at the beginning.
Happily, everyone seems to get that the first bit goes ‘new’. That’s an excellent start, and we can build on that. Having got this part banked, we can look at the rest of the word:
CLEAR
Read that again. See if you can think of a very common word that might contain that string of letters. Perhaps a five-letter word. Got one? Excellent. Try to hold in your mind how it sounds.
Now say them together: New. Clear. Now faster: new-clear. Good.
Easy, isn’t it? So why the common mistake? All I can think is that maybe ‘nuclear’ doesn’t feel like a proper adjective in the way that, say, ‘molecular’ (of or pertaining to molecules) does. Perhaps. But then, what the hell is a ‘nucule’???
Nuclear: of or pertaining to the nucleus of an atom. (I could understand people wanting to pronounce it ‘new-clee-ar’, but nobody ever does.)
Phew. Sorry. Rant over. Just as long as we’re clear about that. Are youcular?
Monday, December 04, 2006
Cameron can see into future
The Tory leader is indeed a man of remarkable vision:
“A child born into poverty in 1970 was more likely to escape poverty in adulthood than a child born into poverty in 1990.”
Ten points to anyone who can tell me in what year a child born in 1990 will enter adulthood. And five bonus points if you can guess how many poverty surveys covering that year have been conducted.
(It may be that he’s taken the standard media misunderstanding of a study comparing social mobility of children born in 1958 with those born in 1970 – which I discussed under point (2) here – and then piled his own extra misunderstanding on top by getting the dates confused. But may Oliver Letwin strike me down if I am wrong.)
Interestingly, though, Cameron seems unable to see into the past:
“Does anyone think that our economy with the highest tax burden in its history is better equipped to compete now than it would be if we could lower taxes? Of course not.”
The tax ‘burden’ now (as well as forecast levels for the next few years) is lower than during the 1980s. Even the Tory Tax Reform Commission report [PDF] admits this: “In 2007 it is forecast to rise to 42.6 per cent – the highest level since 1986.” Back in those days, mass unemployment meant mushrooming dole payments, financed through tax. Now that really was a burden, as opposed to a popular decision to spend more on improving schools and hospitals and reducing poverty.
Also, it’s noteworthy that this remark of Cameron’s blows the gaff on his whole “economic stability before tax cuts” blather. He believes, as deeply and passionately as John Redwood, that tax cuts are the route to a stronger economy. But he’s too frit to say so.
Poverty: Labour’s record and the Tory analysis
One of the Conservative Party’s policy groups recently published a paper [DOC] about Labour’s record on reducing relative poverty. The paper, by Greg Clark MP and Peter Franklin (C&F), omits to say that this record is much, much better than that of the last Tory Government, and also compares well internationally. (A 2005 Unicef report [PDF] said: “Until the late 1990s, the United Kingdom had one of the highest child poverty rates in the OECD. … But over the last six years, the UK government has pioneered an approach to the monitoring and reduction of child poverty that seems to be working.”)
It makes interesting reading, and it is nice to see the Tories finally talking about this area. They still have a long way to go, but if they’re really serious about it (which is debatable, given the key role that redistribution must play and David Cameron’s aim of shrinking the state relative to the private sector), then joy in heaven, etc. etc. I have, though, found a few points of contention.
C&F do not specify whether they are discussing income before housing costs have been taken into account (BHC) or after housing costs (AHC), which is a pity. But by comparing their graphs with the official statistics, you can see that they’re using BHC measures. However, it’s generally thought that for people towards the bottom of the income scale, AHC measures are a more meaningful indicator of their financial well-being (I’ll put some remarks on this in the comments box below).
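For anyone unfamiliar with the distinction, here’s a toy illustration (with invented figures, not real survey data) of why the AHC measure can matter more lower down the income scale: high housing costs can push a household below the 60%-of-median line AHC even when it sits above that line BHC.

```python
# Toy example with made-up weekly figures; not real survey data.
median_bhc = 400.0                 # hypothetical median income before housing costs
median_ahc = 330.0                 # hypothetical median income after housing costs
line_bhc = 0.6 * median_bhc        # 60%-of-median poverty line, BHC (= 240)
line_ahc = 0.6 * median_ahc        # 60%-of-median poverty line, AHC (= 198)

household_bhc = 250.0              # a low-income household...
housing_costs = 90.0               # ...with relatively high housing costs
household_ahc = household_bhc - housing_costs   # = 160

print("Poor on BHC measure?", household_bhc < line_bhc)   # False: 250 is above 240
print("Poor on AHC measure?", household_ahc < line_ahc)   # True: 160 is below 198
```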
They also, in judging Labour’s record, compare the years 1994/95 with 2003/04 – which is an odd choice, given that the 1996/97 figures would be a more salient starting point and that figures for 2004/05 have been available since this March.
C&F dismiss the recent fall in the child poverty rate (those living in households below 60% median income BHC) as “little”: there has been “a 2% fall in the poverty rate… barely above the… margin of error”. Strictly speaking, this is two percentage points, from 23% to 21% of children – a fall of 8.7%. But even so, this is the change between 1994/95 and 2003/04. These observations, based on this deeply spurious choice of dates, are made under the heading “Exaggerating progress since 1997” (my italics). Oh dear.
If we compare 1996/97 with 2004/05, we see a fall [PDF; table 3.1] from 25% to 19% – six percentage points or 24% BHC. (The AHC figures show a fall from 33% to 27% – six percentage points or 18%.)
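To keep percentage points and percentages straight, here’s the arithmetic behind the figures just quoted, using only the rates given above:

```python
# Percentage-point fall vs relative (percentage) fall, from the rates quoted above.
def fall(start, end):
    points = start - end
    relative = 100 * points / start
    return points, round(relative, 1)

print(fall(23, 21))   # C&F's dates, BHC: (2, 8.7)  -> two points, an 8.7% fall
print(fall(25, 19))   # 1996/97 to 2004/05, BHC: (6, 24.0)
print(fall(33, 27))   # 1996/97 to 2004/05, AHC: (6, 18.2)
```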
These figures relate to the Government’s targets to reduce child poverty levels based on the 60% of median income threshold. C&F suggest that this has meant targeting resources to move those just below this line to just above it, in order to get a good headline change in the short term, but having far less real effect on poverty. I agree that any such threshold is arbitrary and that targeting it could in theory lead to such an outcome.
Between 1996/97 and 2004/05, the proportion of children (and people generally) in households below the 60% line has indeed fallen [PDF; tables 2.1 and 3.1]. But so have the proportions below 70% and below 50% (whether we count incomes BHC or AHC). This is a broader enrichment of poorer families than one would expect if such narrow, single-threshold targeting were taking place.
C&F do say, however, that there are more people below the 40% line than there were. This is the result of their own calculations based on figures that are not publicly available, so it’s impossible for me to evaluate this (although they clearly get good marks for research enthusiasm). But a few brief points are in order.
First, presumably they are again talking about incomes BHC rather than AHC, and both would be worth seeing. Second, they are talking about numbers of individuals rather than percentages of the population; at least some of any increase in numbers may be due to population growth. Third, it is not clear whether they have controlled for different types of household composition, which have appropriately different thresholds and whose prevalence in the population changes over time.
Fourth, C&F are again comparing 1994/95 with 2003/04. We know [PDF; tables 2.1 and 3.1] that between the earlier date and Labour’s coming to power there were increases in the poverty rates at the 50% and 60% thresholds, both BHC and AHC, both for children and for the whole population (eight separate measures). It would hardly be surprising if the rate measured at the 40% threshold had worsened in this time as well. Thus, at least some of the increase they claim as part of “Labour’s record” may be attributable to the fag-end of the Major Government.
Likewise, between C&F’s end date and the latest, 2004/05 figures, there were continued falls in the poverty rates, again on all eight of these counts (children: table 3.1; adults: table 3.2 [PDFs]), and so recent improvements at the 40% level may have been missed as a result of their choice of dates.
Now, looking more broadly than at the numbers below any one income threshold, we can see [PDF; table 6] that the poorest 20% of the population have enjoyed faster income growth (BHC and AHC) than the middle or top quintiles under Labour. By contrast [PDF; figure 2.8], real annual income growth between 1979 and 1996/97 was biased entirely in favour of the richer: the richer you were, the faster your income grew; the poorer you were, the slower. That was the Tory record of increasing relative poverty.
But there is a further question as to whether the changes in recent years are specifically due to Labour’s policies, or despite them, or something that might have happened anyway. On this count, a simulation by the Institute for Fiscal Studies has found that changes to tax and benefits policy under Labour have reduced inequality by redistributing in the poor’s favour, relative to what would have happened had the system stayed the same. And calculations [PDF] by John Hills, Director of the LSE’s Centre for Analysis of Social Exclusion, show that:
“Comparing the 2004-05 tax and benefit system with the 1997 system adjusted for price inflation, the poorest tenth are on average 24 per cent better off than they would have been, and the top tenth slightly worse off. Against an alternative comparator of the 1997 system indexed by earnings growth but without reform, the structural changes of the last seven years are more clearly redistributive: the bottom tenth is 11 per cent better off than it would have been with this alternative, but the top four tenths are worse off.”
Cameron recently said that poverty is “not just a lack of money”. He seemed to think that this was news. But the Government doesn’t just focus on income levels, and the number below some given headline figure. Its annual Opportunity For All report looks at 59 different indicators of deprivation, covering areas such as teenage pregnancies, educational achievement, housing quality, employment among disadvantaged groups, rough sleeping, drug use, crime and fear of crime, and road accident casualties in deprived areas. Since 1997, 40 of these show improvement and seven a worsening.
There is a real and persistent (but hopefully not intractable) problem about the bottom 5% or so of the income scale: they have experienced less improvement than the 5% above them, or the 10% above that. The progress on poverty so far has been good, but bolder and smarter policies will be needed for these people to benefit more.
Looking at the Government’s aim of abolishing child poverty by 2020, a recent report by child poverty expert Lisa Harker for the Department for Work and Pensions argued that “a combination of a higher employment rate and enhanced benefit/tax credit support will be necessary”, and that we must “break the link between disadvantage in early childhood and poor life chances”.
Similarly, a Joseph Rowntree Foundation report by Donald Hirsch concluded that “even though substantial increases [in tax credits and benefits] will undoubtedly be needed, they will have to be combined with other measures. Only by improving the opportunities of tomorrow’s parents to provide for themselves, in particular by improving educational outcomes for today’s disadvantaged young people, is there a chance that this bold mission will succeed.”
This mammoth task demands a Government with the enduring passion, and the breadth and depth of thinking, to take it on. On current form, I can’t see that the Tories would be anything other than a step backwards.
Saturday, December 02, 2006
A rule of law unto itself
Lord Bingham recently gave an elegant and scholarly speech on the rule of law (hat tip to Martin Kettle). I wouldn’t normally pick a legal argument with a senior law lord, but the lecture was on a broad matter of principle, and arguably more political than legal. Bingham addressed the subject of the rule of law because the principle has recently been enshrined in statute:
“The Constitutional Reform Act 2005 provides… that the Act does not adversely affect ‘the existing constitutional principle of the rule of law’ or ‘the Lord Chancellor's existing constitutional role in relation to that principle’. This provision… is further reflected in the oath to be taken by Lord Chancellors… to respect the rule of law and defend the independence of the judiciary. But the Act does not define the existing constitutional principle of the rule of law, or the Lord Chancellor's existing constitutional role in relation to it.”
So we have a statutory commitment to a principle that is undefined. Tricky. Perhaps, though, this is just idle rhetoric of no concern – the phrase ‘rule of law’ is often used as vague shorthand for whatever the speaker values within a political system. But Bingham thinks not: “the statutory affirmation of the rule of law as an existing constitutional principle” means that “judges… are not free to dismiss the rule of law as meaningless verbiage, the jurisprudential equivalent of motherhood and apple pie, even if they were inclined to do so. They would be bound to construe a statute so that it did not infringe an existing constitutional principle, if it were reasonably possible to do so.”
So he thinks it worth attempting to define the rule of law: “all persons and authorities within the state, whether public or private, should be bound by and entitled to the benefit of laws publicly and prospectively promulgated and publicly administered in the courts.” There would doubtless have to be certain exemptions and qualifications – “But it seems to me that any derogation calls for close consideration and clear justification.”
Hear, hear. Bingham fleshes this out a little by giving eight sub-rules that he thinks the principle entails:
“[1] the law must be accessible and so far as possible intelligible, clear and predictable … [2] questions of legal right and liability should ordinarily be resolved by application of the law and not the exercise of discretion … [3] the laws of the land should apply equally to all, save to the extent that objective differences justify differentiation … [4] the law must afford adequate protection of fundamental human rights … [5] means must be provided for resolving, without prohibitive cost or inordinate delay, bona fide civil disputes which the parties themselves are unable to resolve … [6] ministers and public officers at all levels must exercise the powers conferred on them reasonably, in good faith, for the purpose for which the powers were conferred and without exceeding the limits of such powers … [7] adjudicative procedures provided by the state should be fair … [8] compliance by the state with its obligations in international law”.
I’d tend to agree that these are good rules (with a fair few caveats, especially on the last one). But I’m not sure these eight suffice for the rule of law as initially defined, nor that all eight are aspects of the rule of law as such. To his credit, Bingham acknowledges in particular that there is controversy over point (4): protection of fundamental rights.
To be sure, we can all agree that this is a very good thing. But imagine a situation in which laws are harsh, punishments are brutal, freedoms are denied – and yet the state system that maintains these conditions could well function highly effectively and in full accordance with its own (unjust) laws. Rule of law, but terrible human rights abuses?
Bingham appreciates the hypothesis, but rejects the conclusion:
“A state which savagely repressed or persecuted sections of its people could not in my view be regarded as observing the rule of law, even if the transport of the persecuted minority to the concentration camp or the compulsory exposure of female children on the mountainside were the subject of detailed laws duly enacted and scrupulously observed. So to hold would, I think, be to strip the existing constitutional principle [of the rule of law]… of much of its virtue”.
I think this is quite wrong. The flaw in his reasoning is to believe that if such systemic repression were compatible with the rule of law, then the rule of law would be a poor principle indeed (and, as it is a fine principle, it must therefore entail protecting fundamental rights). This mistakenly assumes that the rule of law is the only principle needed for a just society. It is not.
There is a real, huge difference between a state in which a repressive legal system operates smoothly (and cruelly), and one in which such a system can often be thwarted by bribery or its own maladministration. Likewise between a liberal democracy with open and just laws that are impartially enforced with rigorous checks and balances, and one in which the high principles of the statute book are often secretly mistranslated into nepotistic judgments by the courts. Both of these comparisons represent strong vs weak rule of law; and the difference between the first pair and the second is at a political level distinct from this principle. To think that the rule of law is everything good is to put all your ideological eggs in one basket.
Or take Iraq: while its government and constitution today are vastly superior in terms of respect for human rights than four years ago, its ability to enforce its will is far inferior. The laws may be better but the rule is tragically lacking.
(And this highlights another aspect of the rule of law that Bingham doesn’t mention: the relevant authorities must actually have the practical ability to enforce laws. This is, admittedly, not a legal aspect and so he might be excused for not discussing it, but surely the phrase ‘the rule of law’ is very strongly suggestive of the fact that the debate has been expanded from the law itself to encompass factors surrounding it that enable it to have force.)
Is this hair-splitting? I do, after all, agree that human rights are essential, whether or not we strictly class them as part of some other principle; indeed, we have other legislation such as the Human Rights Act to protect them, so perhaps it doesn’t really matter whether we interpret them as part of ‘the rule of law’ under this Constitutional Reform Act as well.
But the fact that there is such a point of (apparently technical) legitimate contention serves to illustrate a deeper problem with Bingham’s argument.
Explaining his sub-rule (1), he argues that clear and predictable law precludes “excessive innovation and adventurism by the judges. It is one thing to alter the law's direction of travel by a few degrees, quite another to set it off in a different direction. The one is probably foreseeable and predictable, something a prudent person would allow for, the other not.” And regarding (2), against individual discretion: “The broader and more loosely-textured a discretion is, whether conferred on an official or a judge, the greater the scope for subjectivity and hence for arbitrariness, which is the antithesis of the rule of law. …a discretion should ordinarily be narrowly defined and its exercise capable of reasoned justification.”
I find these points to be utterly right – and utterly integral to the rule of law. Judges and other officials cannot alter or disregard laws as they wish – unless the law itself defines room for judicial manoeuvre under certain circumstances where it can be justified. There must be scope for interpretation, but not to the extent that the nature of a constitutional principle becomes a matter of a judge’s subjective opinion. The rule of law places structural reliability over individual discretion and adventurism.
And this is where the paradox unfolds. If Bingham is right here (and I’m certain that any concept of the rule of law implying otherwise would be a travesty), then our judges cannot go around pronouncing their own views of terms that appear in Acts of Parliament in order that these views should acquire legal force. Fine in a lecture, perhaps, but not in a ruling.
If the meaning of a term cannot be determined at all by reference to the statute book, then clauses that invoke it cannot be applied without the judiciary’s violating this principle, and creating a legal requirement where before there was only clumsy hand-waving. Any plausible judicial fleshing-out of ‘the rule of law’ in this context would thus be an activity that proscribed itself – and in manifesting subjective adventurism in this way, it could set a precedent for future judges, who may not be so decent in their opinions as Bingham.
This undefined phrase that has found its way into law is but a sadly misplaced soundbite; any attempt to give it real teeth would be deeply troublesome. Best to leave it – noble but vapid – to gather dust as part of Tony Blair’s legacy.
“The Constitutional Reform Act 2005 provides… that the Act does not adversely affect ‘the existing constitutional principle of the rule of law’ or ‘the Lord Chancellor's existing constitutional role in relation to that principle’. This provision… is further reflected in the oath to be taken by Lord Chancellors… to respect the rule of law and defend the independence of the judiciary. But the Act does not define the existing constitutional principle of the rule of law, or the Lord Chancellor's existing constitutional role in relation to it.”
So we have a statutory commitment to a principle that is undefined. Tricky. Perhaps, though, this is just idle rhetoric of no concern – the phrase ‘rule of law’ is often used as vague shorthand for whatever the speaker values within a political system. But Bingham thinks not: “the statutory affirmation of the rule of law as an existing constitutional principle” means that “judges… are not free to dismiss the rule of law as meaningless verbiage, the jurisprudential equivalent of motherhood and apple pie, even if they were inclined to do so. They would be bound to construe a statute so that it did not infringe an existing constitutional principle, if it were reasonably possible to do so.”
So he thinks it worth attempting to define the rule of law: “all persons and authorities within the state, whether public or private, should be bound by and entitled to the benefit of laws publicly and prospectively promulgated and publicly administered in the courts.” There would doubtless have to be certain exemptions and qualifications – “But it seems to me that any derogation calls for close consideration and clear justification.”
Hear, hear. Bingham fleshes this out a little by giving eight sub-rules that he thinks the principle entails:
“[1] the law must be accessible and so far as possible intelligible, clear and predictable … [2] questions of legal right and liability should ordinarily be resolved by application of the law and not the exercise of discretion … [3] the laws of the land should apply equally to all, save to the extent that objective differences justify differentiation … [4] the law must afford adequate protection of fundamental human rights … [5] means must be provided for resolving, without prohibitive cost or inordinate delay, bona fide civil disputes which the parties themselves are unable to resolve … [6] ministers and public officers at all levels must exercise the powers conferred on them reasonably, in good faith, for the purpose for which the powers were conferred and without exceeding the limits of such powers … [7] adjudicative procedures provided by the state should be fair … [8] compliance by the state with its obligations in international law”.
I’d tend to agree that these are good rules (with a fair few caveats, especially on the last one). But I’m not sure these eight suffice for the rule of law as initially defined, nor that all eight are aspects of the rule of law as such. To his credit, Bingham acknowledges in particular that there is controversy over point (4): protection of fundamental rights.
To be sure, we can all agree that this is a very good thing. But imagine a situation in which laws are harsh, punishments are brutal, freedoms are denied – and yet the state system that maintains these conditions functions entirely effectively and in full accordance with its own (unjust) laws. Rule of law, but terrible human rights abuses?
Bingham appreciates the hypothesis, but rejects the conclusion:
“A state which savagely repressed or persecuted sections of its people could not in my view be regarded as observing the rule of law, even if the transport of the persecuted minority to the concentration camp or the compulsory exposure of female children on the mountainside were the subject of detailed laws duly enacted and scrupulously observed. So to hold would, I think, be to strip the existing constitutional principle [of the rule of law]… of much of its virtue”.
I think this is quite wrong. The flaw in his reasoning is to believe that if such systemic repression were compatible with the rule of law, then the rule of law would be a poor principle indeed (and, as it is a fine principle, it must therefore entail protecting fundamental rights). This mistakenly assumes that the rule of law is the only principle needed for a just society. It is not.
There is a real, huge difference between a state in which a repressive legal system operates smoothly (and cruelly), and one in which such a system can often be thwarted by bribery or its own maladministration. Likewise between a liberal democracy with open and just laws that are impartially enforced with rigorous checks and balances, and one in which the high principles of the statute book are often secretly mistranslated into nepotistic judgments by the courts. Both of these comparisons represent strong vs weak rule of law; and the difference between the first pair and the second is at a political level distinct from this principle. To think that the rule of law is everything good is to put all your ideological eggs in one basket.
Or take Iraq: while its government and constitution today are, in terms of respect for human rights, vastly superior to those of four years ago, its ability to enforce its will is far inferior. The laws may be better, but the rule is tragically lacking.
(And this highlights another aspect of the rule of law that Bingham doesn’t mention: the relevant authorities must actually have the practical ability to enforce laws. This is, admittedly, not a legal aspect, so he might be excused for not discussing it, but surely the phrase ‘the rule of law’ strongly suggests that the debate extends beyond the law itself to the surrounding factors that enable it to have force.)
Is this hair-splitting? I do, after all, agree that human rights are essential, whether or not we strictly class them as part of the rule of law rather than as a separate principle; indeed, we have other legislation, such as the Human Rights Act, to protect them, so perhaps it doesn’t really matter whether we also interpret them as part of ‘the rule of law’ under this Constitutional Reform Act.
But the fact that there is such a point of (apparently technical) legitimate contention serves to illustrate a deeper problem with Bingham’s argument.
Explaining his sub-rule (1), he argues that clear and predictable law precludes “excessive innovation and adventurism by the judges. It is one thing to alter the law's direction of travel by a few degrees, quite another to set it off in a different direction. The one is probably foreseeable and predictable, something a prudent person would allow for, the other not.” And regarding (2), against individual discretion: “The broader and more loosely-textured a discretion is, whether conferred on an official or a judge, the greater the scope for subjectivity and hence for arbitrariness, which is the antithesis of the rule of law. …a discretion should ordinarily be narrowly defined and its exercise capable of reasoned justification.”
I find these points to be utterly right – and utterly integral to the rule of law. Judges and other officials cannot alter or disregard laws as they wish – unless the law itself defines room for judicial manoeuvre under certain circumstances where it can be justified. There must be scope for interpretation, but not to the extent that the nature of a constitutional principle becomes a matter of a judge’s subjective opinion. The rule of law places structural reliability over individual discretion and adventurism.
And this is where the paradox unfolds. If Bingham is right here (and I’m certain that any concept of the rule of law implying otherwise would be a travesty), then our judges cannot go around pronouncing their own views of terms that appear in Acts of Parliament in order that these views should acquire legal force. Fine in a lecture, perhaps, but not in a ruling.
If the meaning of a term cannot be determined at all by reference to the statute book, then clauses that invoke it cannot be applied without the judiciary’s violating this principle, and creating a legal requirement where before there was only clumsy hand-waving. Any plausible judicial fleshing-out of ‘the rule of law’ in this context would thus be an activity that proscribed itself – and in manifesting subjective adventurism in this way, it could set a precedent for future judges, who may not be so decent in their opinions as Bingham.
This undefined phrase that has found its way into law is but a sadly misplaced soundbite; any attempt to give it real teeth would be deeply troublesome. Best to leave it – noble but vapid – to gather dust as part of Tony Blair’s legacy.
Friday, December 01, 2006
Sent down, rent up
My head is ever so gently spinning:
“The public are to be offered the chance to purchase shares in new prisons under a ‘buy to let’ scheme being considered by the Home Office, it emerged yesterday. …
“Home Office finance directors… hope that the public can be tempted to invest in a new-style property company that would build jails and then rent them out to private prison operators. This would provide a steady guaranteed dividend from the ‘rental income’.
“One incentive for small investors is that the government's punitive penal policy has seen prison numbers rise relentlessly over the past 10 years and would appear to guarantee a steady stream of rental income with no apparent shortage of prison ‘tenants’.”
Well, it’s true that it is increasingly hard to get a foot on the prison ladder these days. It’s only reasonable that we should consider co-ownership and renting.
But I have some concerns: are these private prison operators going to be subletting to any unsavoury characters? Who’s responsible for repairs, or any drug use that may take place on the premises? When my leasehold runs out after 25 years, will Strangler McGraw be forced to leave even if he still has six-and-a-half life terms to serve? Or what about Frank the Knife, who’s serving 18 years – will he end up getting squatter’s rights and then refuse to leave?
And is there any danger of Sarah Beeny turning up with a camera crew and some sledgehammers for the lads of D wing to knock a few walls through?
Still, a safer bet than renting your property out to bloody students…