
Saturday, October 2, 2010

10/2 Technology: Technology blog | guardian.co.uk

     
The Technology newsbucket: mobile malware, shorter Google, Yahoo sheds and more
October 1, 2010 at 10:56 AM
 

Plus open data (or not) and its attitudes, mobile development trends and more


Google has been collecting its Street View photos for Antarctica for some time. Photo by National Library NZ on The Commons on Flickr. Some rights reserved

A quick burst of 11 links for you to chew over, as picked by the Technology team

Making money with mobile malware >> Graham Cluley's blog
"Earlier this year I described the Terdial Trojan horse, which was distributed posing as a Windows Mobile game called '3D Anti-terrorist action', but made calls apparently to Antarctica, the Dominican Republic, Somalia and Sao Tome and Principe without the owner's permission.
So how did it make money for the hackers?
"Well, it transpires that although the Trojan did make phone calls to numbers associated with various far-flung corners of the world, the calls never made it that far."

Google URL Shortener Gets a Website >> Google Social Web blog
"There are many shorteners out there with great features, so some people may wonder whether the world really needs yet another. As we said late last year, we built goo.gl with a focus on quality. With goo.gl, every time you shorten a URL, you know it will work, it will work fast, and it will keep working." Ooh, take that, er.. whichever URL shortener gave up a while back. An API is coming, Google says.

Gallery: JPEG vs Google WebP images
"Since browsers do not currently support WebP, we used a PNG container to allow users to see these WebP images in a browser." Yes - while it's good to have a more efficient image container, it's a bit of a problem if browsers can't view it.

Yahoo Losing More Top Execs >> AllThingsD
"This entire mess–and that's precisely what it is–calls into question the tenure of Bartz, a tough-talking, cost-cutting exec who was brought in to clean up Yahoo after the maelstrom around the failed takeover attempt by Microsoft several years ago."

How The Guardian is pioneering data journalism with free tools >> Nieman Journalism Lab
We are, you know.

Payments to Suppliers over £500 (in PDF only..) >> Birmingham City Council
"At Birmingham we are committed to making our finances clear, so that everyone can see exactly how we are spending money." Which is why we put it in PDFs and redact some data which you'll need an FOI request to get at.

Local government data and the armchair auditors: are you sitting comfortably? >> Public Finance
David Walker: "A government committed to evidence might, in theory, have researched the prospect for armchair auditors and other dimensions of the Big Society before they became policies. What do we know about people's enthusiasm and capacity? Ben Page of Ipsos Mori says his surveys imply a 'seismic shift' would be necessary to get the involvement the government envisages.
"Gillian Fawcett, head of public sector at the Association of Chartered Certified Accountants, says: 'The reality is that very few members of the public currently look at local authorities' accounts – even though that opportunity is available to them. In many cases, where people are interested in accounts at all, their interest will be restricted to a specific issue, often to an area of personal interest.'"
Just remind us how the government could have researched the evidence for armchair auditors? Or where we could audit items of spending above £500 before Eric Pickles's order? Chickens and eggs come to mind.

What Platforms Will Have Mobile App Devs in 12 Mos? >> GigaOm
"Where it gets interesting is when we ask which platforms developers plan to create apps for in the future. While most will continue working on iOS, we saw over a 50 percent increase in those that said they plan to work on Android apps (from 39 percent to 61 percent) and a doubling of interest in Windows Phone apps (from 9 percent to 18 percent). BlackBerry also saw increased focus 12 months out: 19 percent up from 12 percent today. It should be noted this survey was taken before RIM's news this week." Note though that this is extremely US-centric, so Nokia (Symbian) is significantly under-represented. Flash developers for mobile phones - not that we're sure that's a big crowd - may start feeling lonely soon, though.

Distilling the W32.Stuxnet Components >> Symantec Connect
In-depth post from July analysing this intriguing piece of malware.

ACS:Law: This is what regulatory failure looks like >> TechnoLlama
"..the more harm would come from the unlawful processing, the more security there should be. ACS:Law and the ISPs are therefore in blatant breach of the Seventh Principle [of the Data Protection Act]. This is unforgivable, and the Information Commissioner should make a stand and send a clear message to other data processors. Otherwise the DPA is just reduced to a bunch of fancy words on paper."

A user's guide to websites, part 1: If it wasn't broken why fix it? >> Rev Dan Catt's Blog
Of Flickr (and GuardianRoulette) fame: "Everyone still loves feature X but hates using Perl [in which it's been re-written], it gets re-written 3 times in PHP, it still doesn't scale.
"Someone re-writes it in an afternoon in Python but it only works and scales if sub-feature "x" gets left out. 98% of users don't notice, 1.9% of users form a protest #hashtag on twitter. 0.1% of users argue about the merits of scaling in PHP vs Python vs Their Favourite Language, they write a blogpost about it (using their own blogging platform they wrote themselves in 1997) slashdot links the post and ironically declares the original site "over"."

You can follow Guardian Technology's linkbucket on delicious

To suggest links, tag articles on delicious.com with "guardiantech"


guardian.co.uk © Guardian News & Media Limited 2010 | Use of this content is subject to our Terms & Conditions | More Feeds


   
   
Local council spending over £500: full list of who has published what so far
October 1, 2010 at 9:25 AM
 

Ask every local authority in England to publish all its spending over £500 in an open format and what do you get? A whole load of PDFs. See our list of the best and the worst
Get the data

It's an open data revolution. Every one of the 326 local authorities in England has to publish every item of spending over £500 by the end of this year.

In the event, only 66 councils have put their data online so far - despite huge pressure from the DCLG, which published its own spending data yesterday. It's worth reading Chris Taggart's piece on this from yesterday.

Birmingham … published theirs as a PDF on a confusing and messy page. Not only is it not reusable as data without manually extracting it from the PDF file, but there's none of the richness of the Trafford council data: no department names, no supplier IDs, no descriptions of what the payment was for, and no classification. Comparison by category or by department is therefore impossible. They also seem to have silently redacted information, meaning that it's impossible to challenge whether a payment to a supplier should have been redacted, as you'll never know it was made.

The Liberal-Conservative coalition government has been pretty explicit about what it expects. First the prime minister, David Cameron, wrote a letter to government departments in which he told them he expected government to:

ensure that any data published is made available in an open format so that it can be re-used by third parties

In case there's any doubt, that means Excel or CSV files, or even XML. Then Eric Pickles told all local authorities in England (he has no authority over Scotland and Wales) that

I don't expect everyone to get it right first time, but I do expect everyone to do it

In September, the government published its guidance for local authorities.

Councils have until January to comply but in the meantime, a number have already started to release their data. But it's not quite working out.

It should be a fantastic journalistic resource. In theory, councils will publish their data so that we can compare how they spend their money and pick up on the good and bad in public spending.

We wanted to start listing all the councils that have complied so far - and give you the links so you could check for yourself.

And what it shows is a disturbing lack of awareness among councils as to what they're doing. Of the 66 councils in England who have published so far:

• Many - 36% at last analysis - have published their spending in PDF format only, including East Herts, Broxtowe, Fareham and Hammersmith & Fulham
• Some publish monthly, some annually and some quarterly - making it difficult to compare different councils. One, East Herts, publishes weekly
• Most of them are Conservative councils
• A quarter of them are from London and the South East
• A number of councils have published their data using Spotlight on Spend, a service from Spikes Cavell which was controversial earlier this year because of a perceived lack of openness

The PDF issue is the biggest problem. While PDFs are fine for displaying documents, they are the worst possible format for any kind of analysis - publishing as PDF allows you to appear open without actually being open.

The Department for Communities and Local Government plans to publish full guidelines which will tell councils how to do this in the next few days. "The deadline is not until January," says a spokesman adding that open data formats will be expected. "We want this to be the case for all data."

In the meantime, we will monitor councils right here, adding more as they publish. If you know of any, please let us know in the comment field below. The spreadsheet is attached too, so let us know if you perform any analysis.

Data summary

Download the data

DATA: download the full spreadsheet

World government data

Search the world's government data with our gateway

Can you do something with this data?

Please post your visualisations and mash-ups on our Flickr group or mail us at datastore@guardian.co.uk

Get the A-Z of data
More at the Datastore directory

Follow us on Twitter


Google gets into the URL shortening business - in its own quiet way
October 1, 2010 at 7:55 AM
 

No API yet, but it does have a very neat addition in the form of QR codes to take you to web pages. We road-tested it, though not on a road.

Just what the world needs: another URL shortener. Though this time it's from Google - which as Jeff Atwood (half the brains behind the wonderful Stack Overflow) points out, might actually be one of the best places to have a shortener, seeing that it must already have a vast table of lookups for URLs all over the web.

The goo.gl shortener is at present pretty basic: just a text box where you enter the URL to be shortened. If you're not signed into a Google account you always get the same shortened URL back for a given URL entry, but if you're signed in you'll get a different shortened URL particular to you. That's like, say, bit.ly, which has the idea of "users" (so you can shorten a URL that someone else has already shortened but get your own result, which means you can see whether people are clicking on your link or on other versions of the same link). [Corrected: I'd forgotten to sign in to test the Google shortening.]

Thus http://bit.ly/dvox3E, http://bit.ly/cPBpf4, http://goo.gl/B6dt and http://goo.gl/info/xziJ all go to the same place, but the bit.ly one will give you more granular statistics: compare and contrast http://bit.ly/dvox3E+ and http://bit.ly/cPBpf4+ and http://goo.gl/B6dt+, and http://goo.gl/info/xziJ+ which are the respective pages for the statistics about each shortened version. (You get the info page for goo.gl links, as with bit.ly links, by adding a + to the end of the shortened URL.)

There also isn't an API for goo.gl yet, though the company promises that it's coming.

One very neat thing that it does do: QR codes. These are the two-dimensional form of barcodes, which can hold a lot more data - nearly 3KB of binary data at most.

To generate a QR code using goo.gl, you simply add ".qr" to the end of the shortened link. Thus: http://goo.gl/B6dt.qr - which looks like the image at the left.

QR codes are useful for mobile phones, which can read them via their cameras. (Yes, we have heard the suggestion that we should use QR codes in the paper to link to the website. Can we consider it a little longer?)

It's worth noting that Atwood (among others) isn't a fan of shorteners - and he quotes Joshua Schachter, who notes that

"The worst problem is that shortening services add another layer of indirection to an already creaky system. A regular hyperlink implicates a browser, its DNS resolver, the publisher's DNS server, and the publisher's website. With a shortening service, you're adding something that acts like a third DNS resolver, except one that is assembled out of unvetted PHP and MySQL, without the benevolent oversight of luminaries like Dan Kaminsky and St. Postel. "

For this reason, most shorteners won't let you shorten an already-shortened link (because such double obfuscation is generally used by spammers or for malicious reasons).

But as he said in 2007,

"I often wonder why Google doesn't offer an URL redirection service, as they already keep an index of every URL in the world. The idea of Google disappearing tomorrow, or having availability problems, is far less likely than the seemingly random people and companies who operate these URL redirection services-- often for no visible income. "

Well, now it has. It will be interesting to see how much Twitter traffic (since that's the main avenue for URL shorteners) goes to it.

[Corrected to add that Google has "user shortening".]


How to be a data journalist
October 1, 2010 at 6:00 AM
 

Data journalism trainer and writer Paul Bradshaw explains how to get started in data journalism, from getting to the data to visualising it
Guardian data editor Simon Rogers explains how our data journalism operation works

Data journalism is huge. I don't mean 'huge' as in fashionable - although it has become that in recent months - but 'huge' as in 'incomprehensibly enormous'. It represents the convergence of a number of fields which are significant in their own right - from investigative research and statistics to design and programming. The idea of combining those skills to tell important stories is powerful - but also intimidating. Who can do all that?

The reality is that almost no one is doing all of that, but there are enough different parts of the puzzle for people to easily get involved in, and go from there. To me, those parts come down to four things:

1. Finding data

'Finding data' can involve anything from having expert knowledge and contacts to being able to use computer-assisted reporting skills or, for some, specific technical skills such as MySQL or Python to gather the data for you.

2. Interrogating data

Interrogating data well means you need to have a good understanding of jargon and the wider context within which data sits, plus statistics - a familiarity with spreadsheets can help save a lot of time.

3. Visualising data

Visualising and mashing data has historically been the responsibility of designers and coders, but an increasing number of people with editorial backgrounds are trying their hand at both - partly because of a widening awareness of what is possible, and partly because of a lowering of the barriers to experimenting with them.

4. Mashing data

Tools such as ManyEyes for visualisation, and Yahoo! Pipes for mashups, have made it possible for me to get journalism students stuck in quickly with the possibilities - and many catch the data journalism bug soon after.

How to begin?

So where does a budding data journalist start? An obvious answer would be "with the data" - but there's a second answer too: "With a question".

Journalists have to balance their role in responding to events with their role as an active seeker of stories - and data is no different. The New York Times' Aron Pilhofer recommends that you "Start small, and start with something you already know and already do. And always, always, always remember that the goal here is journalism." The Guardian's Charles Arthur suggests "Find a story that will be best told through numbers", while The Times' Jonathan Richards and The Telegraph's Conrad Quilty-Harper both recommend finding your feet and coming up with ideas by following blogs in the field and attending meetups such as Hacks/Hackers.

There is no shortage of data being released that you can get your journalistic teeth into. The open data movement in the UK and internationally is seeing a continual release of newsworthy data, and it's relatively easy to find datasets being released by regulators, consumer groups, charities, scientific institutions and businesses. You can also monitor the responses to Freedom of Information requests on What Do They Know, and on organisations' own disclosure logs. And of course, there's the Guardian's own datablog.

A second approach, however, is to start with a question - "Do speed cameras cost or save money?" for example, was one topical question that was recently asked on Help Me Investigate, the crowdsourcing investigative journalism site that I run - and then to search for the data that might answer it (so far that has come from a government review and a DfT report). Submitting a Freedom of Information request is a useful avenue too (make sure you ask for the data in CSV or similar format).

Whichever approach you take, it's likely that the real work will lie in finding the further bits of information and data to fill out the picture you're trying to clarify. Government data, for example, will often come littered with jargon and codes you'll need to understand. A call to the relevant organisation can shed some light. If that's taking too long, an advanced search for one of the more obscure codes can help too - limiting your search, for example, by including site:gov.uk filetype:pdf (or equivalent limitations for your particular search) at the end.

You'll also need to contextualise the initial data with further data. Say you have some information about a government department's changing wage bill, for example: has the department workforce expanded? How does it compare to other government departments? What about wider wages within the industry? What about inflation and changes in the cost of living? This context can make a difference between missing and spotting a story.
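To make the inflation point concrete, here is a minimal Python sketch - every figure and index value below is invented purely for illustration (real CPI data would come from the ONS):

```python
# Hypothetical price index (2008 = 100) and a department's nominal wage
# bill -- invented numbers, purely for illustration.
cpi = {2008: 100.0, 2009: 102.2, 2010: 105.6}
wage_bill = {2008: 40_000_000, 2009: 41_000_000, 2010: 42_500_000}

def real_terms(nominal, year, base_year=2010):
    """Restate a nominal figure in base-year prices."""
    return nominal * cpi[base_year] / cpi[year]

# The 2008 bill restated in 2010 prices, and the real growth since then:
bill_2008_real = real_terms(wage_bill[2008], 2008)
growth = wage_bill[2010] / bill_2008_real - 1
```

With these made-up numbers, an apparent 6.25% nominal rise shrinks to well under 1% once restated in constant prices - exactly the kind of context that makes the difference between missing and spotting a story.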

Quite often your data will need cleaning up: look out for different names for the same thing, spelling and punctuation errors, poorly formatted fields (e.g. dates that are formatted as text), incorrectly entered data and information that is missing entirely. Tools like Freebase Gridworks can help here.
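A couple of those clean-ups can be done in a few lines of Python. This is only a sketch: the name variants and date formats handled below are assumed examples, not a definitive list of what you'll meet in real council data.

```python
import re
from datetime import datetime

def clean_supplier(name):
    # Strip punctuation noise, collapse whitespace, uppercase for matching,
    # and normalise a common company-suffix variant.
    name = re.sub(r"[.,]", "", name.strip())
    name = re.sub(r"\s+", " ", name).upper()
    return re.sub(r"\b(LTD|LIMITED)\b", "LTD", name)

def parse_date(text):
    # Try a few formats that spreadsheets commonly mix (assumed examples);
    # return None if nothing matches rather than guessing.
    for fmt in ("%d/%m/%Y", "%d-%b-%y", "%Y-%m-%d"):
        try:
            return datetime.strptime(text.strip(), fmt).date()
        except ValueError:
            continue
    return None
```

For example, `clean_supplier("Acme  Widgets Limited.")` and `clean_supplier("ACME WIDGETS LTD")` now collapse to the same key, so payments to the same supplier can be summed together.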

At other times the dataset you need will come in an inconvenient format, such as a PDF, Powerpoint, or a rather ugly webpage. If you're lucky, you may be able to copy and paste the data into a spreadsheet. But you won't always be lucky.

At these moments some programming knowledge comes in handy. There's a sliding scale here: at one end are those who can write scripts from scratch that scrape a webpage and store the information in a spreadsheet. Alternatively, you can use a website like Scraperwiki, which already has example scripts that you can customise to your own ends - and a community to help. Then there are online tools like Yahoo! Pipes and the Firefox plugin OutWit Hub. If the data is in an HTML table you can even write a one-line formula in Google Spreadsheets to pull it in. Failing all the above, you might just have to record it by hand - but whatever you do, make sure you publish your spreadsheet online and blog about it so others don't have to repeat your hard work.
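As an illustration of the scripting end of that scale, here is a minimal sketch using only Python's standard library to pull the rows out of an HTML table. A real scrape would fetch the page with urllib and cope with much messier markup; the table below is an invented example.

```python
from html.parser import HTMLParser

class TableScraper(HTMLParser):
    """Collect the cell text of every row in a simple HTML table."""
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._in_cell = [], [], False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []                 # start a fresh row
        elif tag in ("td", "th"):
            self._in_cell = True
            self._row.append("")           # start a fresh cell

    def handle_endtag(self, tag):
        if tag == "tr" and self._row:
            self.rows.append(self._row)    # row complete
        elif tag in ("td", "th"):
            self._in_cell = False

    def handle_data(self, data):
        if self._in_cell:
            self._row[-1] += data.strip()  # accumulate cell text

scraper = TableScraper()
scraper.feed("<table><tr><th>Supplier</th><th>Amount</th></tr>"
             "<tr><td>Acme Ltd</td><td>512.00</td></tr></table>")
# scraper.rows now holds a header row and one data row,
# ready to write out as CSV.
```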

Once you have the data you need to tell the story, you need to get it ready to visualise. Trim off everything peripheral to what you need in order to visualise your story. There are dozens of free online tools you can use to do this. ManyEyes and Tableau Public are good places to start for charts. This poster by A. Abela (PDF) is a good guide to what charts work best for different types of data.

Play around. If you're good with a graphics package, try making the visualisation clearer through colour and labelling. And always include a piece of text giving a link to the data and its source - because infographics tend to become separated from their original context as they make their way around the web.

For maps, the wonderful OpenHeatMap is very easy to use - as long as your data is categorised by country, local authority, constituency, region or county. Or you can use Yahoo! Pipes to map the points of interest. Both of these are actually examples of mashups, which is useful if you like the word "mashups" and want to use it at parties. There are other tools too, but if you want to get serious about mashing up, you will need to explore the world of programming and APIs. At that point you may sit back and think: "Data journalism is huge."

And you know what? I said that once.

Paul Bradshaw is the founder of Help Me Investigate, Reader in Online Journalism at Birmingham City University, and teaches at City University in London. He publishes the Online Journalism Blog


Is cyberwarfare a genuine threat?
September 30, 2010 at 11:28 AM
 

Suggestions that the dangers of computer warfare have been overdone don't stand up to the emerging realities

The video shows a generator tearing itself apart after a cyberattack. Happily, it was a simulated attack, set up by the US Department of Homeland Security in 2007 - but it shows the sort of thing that cyberwar, and in particular the Stuxnet worm (the first known to attack machinery in this way), is aiming to do.

What's quite scary about the video is that the (sanctioned) hackers who did it were told only the domain of the system.

The Stuxnet worm would do much the same to the generator: it interrupts the processes which monitor events, so that high-speed machinery effectively goes unmonitored and out of control.

Is that real? In 2009 Fox News (yes, we know) reported that: "The US power grid has been hacked by foreign spies … Russian and Chinese cyberspies not only got into our electrical system but left behind computer programs that could be used for future attacks." The Department of Homeland Security issued a vaguely denial-based denial - "not aware of any incidents where the grid was compromised" - but it was hardly convincing: "the vulnerability is something we have known about for years".

Cyberwar isn't new – Russia is believed to have used it before its invasion of Georgia to knock out websites and, perhaps, infrastructure. Napoleon famously said that an army marches on its stomach, but these days it thinks over the internet.

And in the US, Lockheed Martin has put this (rather flashy) video together about cyberwar – in which it says that one of the biggest enemies is "foreign governments".

"Economic espionage has always been a threat", explains Eric Cole, chief scientist of cyber security at Lockheed Martin. Which recalls, of course, the Titan Rain attacks against the US and UK governments in 2006/7. Cole is confident, by the way, that he's going to have work for the next 30 years in advising on how to evade these attacks.

Is Stuxnet the way forward? And if it is, what does that imply?

One cause for slight concern in all this is the fact that Siemens's SCADA system, as targeted by Stuxnet, runs on top of Windows – which offers all sorts of openings for zero-day vulnerabilities. One can't help feeling that North Korea's decision to try to develop its own operating system based on Linux was wise: not only does it save money, but it might have some resistance to attempts to infiltrate its systems via worms like this. Though if you're dealing with national spy agencies determined to infect your systems, that may be a futile hope.


The TechCrunch/AOL deal - immortalised in song
September 30, 2010 at 8:46 AM
 

I've had some curious conversations about AOL acquiring TechCrunch (I nearly inadvertently wrote TechCrunch acquiring AOL... perhaps file that under Arrington/wishlist) but tech blogs have been eerily devoid of deeper comment or analysis on the deal beyond backslapping and congratulations.

As Kellan tweeted: "Could TechCrunch, after 5+ years writing about the biz, possibly be naive enough to believe, 'Nothing will change, just more resources!'?"

I expect most entrepreneurs would feel they were taking their professional life in their hands if they spoke out against TechCrunch. And while, yes yes, it is a powerhouse for the startup community as I said yesterday, many people have said that they question how healthy it is for one blog to have so much influence. Arrington is so woven into the startup scene that this deal represents success for 'one of us'. No-one wants to poop that party, especially when star struck by MC Hammer. Seriously.

Check out ilovepopula's TechCrunch AOL anthem on Soundcloud: "TechCrunch belongs to us," he sings.


Privately, those in the know are questioning whether Arrington will survive the three-year tie-in he's signed. "Three years is too long," one said. "I give him a year, even with the money on the table."

Om Malik, who broke the story about the deal, last night wrote that Arrington is both a ruthless competitor and an extremely loyal friend, which I think means that the only way he can cover news about TechCrunch itself is to do it 'straight as a straight thing'. That's much the same for the rest of the tech blogs.

Malik did give us a good infographic on Arrington's road to millions, as well as the nugget that the price was at least $25m, and possibly as much as $60m. The really interesting story will be finding out what Arrington does next.


Local Council Spending Data: The Good, The Bad, and The Downright Obstructive
September 30, 2010 at 8:15 AM
 

The brains behind the OpenlyLocal site assesses where we've got to with local government spending. It's a mixed bag - and some of the worst is really bad
Datablog: see who has released what


Oh no, not you again. Photo by Cristóbal Cobo Romaní on Flickr. Some rights reserved

By Chris Taggart

Now that the guidelines for the publishing of local council spending data have been published, it's a good point to take stock of how councils are actually, well, publishing the data. And the picture is none too pretty.

Out of the 66 councils (of a total of 434) publishing data (they have until January to start doing it), only 32 are publishing it in the correct format - as a comma-separated file, which means it's easy to open in spreadsheets, import into a database, or reuse in mashups. The rest are using a variety of tricky formats (e.g. Word or Excel files) that make it problematic at best to use the information as data and to combine it with other data, so that it can be compared over time and with other authorities.
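To illustrate why the comma-separated format matters, here is a short Python sketch that totals spending by department. The rows and column names are invented in the style the guidance suggests; a real file would be downloaded from a council's website and its headings may well differ.

```python
import csv
import io

# Hypothetical spending rows -- invented for illustration only.
sample = """Date,Supplier,Amount,Department
01/09/2010,Acme Ltd,512.00,Housing
03/09/2010,Widgets plc,1250.50,Transport
"""

# With CSV, aggregating by department is a few lines of code;
# with a PDF it would mean manual re-keying first.
totals = {}
for row in csv.DictReader(io.StringIO(sample)):
    dept = row["Department"]
    totals[dept] = totals.get(dept, 0.0) + float(row["Amount"])
```

The same loop works unchanged on any council's file that follows the same column conventions - which is precisely the point of a common open format.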

The worst offenders are those publishing it as PDFs, a document format that is ideal for printing (which was what it was designed for), and terrible for extracting data from.

I've been told privately by some staff working for those authorities that they've been instructed to use PDFs precisely because it will make reuse more difficult.

I should declare an interest here. I run OpenlyLocal, which opens up local government data, and I also helped draw up the guidelines on behalf of the Local Public Data Panel, on which I sit. We're also importing all the spending data, matching it up against companies and charities, and releasing the result as open data.

A good example of how two councils can take completely different approaches to the same thing comes with Trafford Council and Birmingham City Council. Both have published their information within the past couple of days.

Trafford published theirs as a CSV file, using the standards set out in the guidance, which means it can be instantly compared with any other council using the same guidance (and, incidentally, it is published on their excellent open data page, which lists large amounts of data that can be reused without restriction). They are also looking at publishing previous years' spending in the same format, to make it easy to see how spending has changed over time.

Birmingham on the other hand published theirs as a PDF on a confusing and messy page. Not only is it not reusable as data without manually extracting it from the PDF file, but there's none of the richness of the Trafford council data: no department names, no supplier IDs, no descriptions of what the payment was for, and no classification. Comparison by category or by department is therefore impossible. They also seem to have silently redacted information, meaning that it's impossible to challenge whether a payment to a supplier should have been redacted, as you'll never know it was made.

[Charles Arthur notes: with some effort, though, it has been transformed into a spreadsheet by Paul Daniel.]

The scary thing, however, is that Birmingham is by no means the worst. In fact there are many councils publishing the information not only as PDFs, but as PDFs with no licence for reuse and with very little data in them. Special mention here should go to Hammersmith & Fulham, which trumpeted its publication in June of spending information for Jan-Mar, albeit as a near-unusable PDF, but since then hasn't published a thing.

However the award for the council with the most useless spending data is the London Borough of Wandsworth, in south-west London. First, the information is stuck in a PDF (and for the techies out there: it's been published with headings on each page, meaning that extraction is more tricky than usual).

Second, there is no licence for reuse, meaning that the website Terms & Conditions apply, in this case "Intellectual property rights arising from this site and its contents belong to the council. Use of the contents is limited to private and non-commercial use purposes only and may not be further exploited without prior written permission of the Council."

Third, the information consists of a supplier name and an amount (presumably a total for the month). No date. No reference. No department. No category. No supplier id. No description. No classification.

Somehow, this is not, I think, what the Secretary of State had in mind when he ordered councils to open their books to the public.

One ray of hope: Eric Pickles, the secretary of state, is expected to make an announcement on Friday telling councils that they must obey the guidelines. It will be interesting to see if it is retrospective - and how quickly it has to be implemented. But something really needs to change in some places.

Charles Arthur adds: one of the points of the Free Our Data campaign was that publishing data like this would create opportunities for organisations like OpenlyLocal to create businesses doing things with the data that councils couldn't or wouldn't do. Look at what's happened with the number of apps for finding Boris Bikes in London, for example: that's a commercial opportunity for app writers created entirely from making the data free. (And it has the byproduct of encouraging the use of the bikes, so everyone wins.)

When local councils try to obstruct that, it holds back the private sector - and nobody benefits, not even the councils. We'll seek an interview with Mr Pickles on this matter in the future to see whether he sees it the same way - and what action he might take.

