
Blog

Introducing the Search for Tweets tool

Hey – my name is Ed Cawthorne and I have recently started with ScraperWiki as the resident product manager. My first task is to let you know about the “Search for Tweets” tool on the new ScraperWiki platform. To understand how the Twitter tool came about, it helps to know some of the background. […]

Uploading a (structured) spreadsheet

We’ve made a new tool to help you upload a structured spreadsheet. That is to say, one that contains a table with headers. I’m trying it out with an old spreadsheet of expenses from when I worked at mySociety. If your spreadsheet isn’t consistent enough, it tells you where you’ve gone wrong. In my case, I […]
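
For a flavour of the kind of check the tool runs, here is a minimal sketch in Python of validating that a spreadsheet really is “a table with headers”. It assumes a CSV export and an illustrative file name; the real tool accepts spreadsheets directly and applies its own, richer rules.

```python
import csv

def check_structure(path):
    """Report rows that don't match the header row's column count.

    Illustrative only: this sketch works on a CSV export of the
    spreadsheet and checks just two things - a non-empty header row,
    and the same number of cells on every subsequent row.
    """
    with open(path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader, None)
        if not header or any(cell.strip() == "" for cell in header):
            return ["Row 1: expected a header row with a name in every column"]
        problems = []
        for line_no, row in enumerate(reader, start=2):
            if len(row) != len(header):
                problems.append(
                    f"Row {line_no}: {len(row)} cells, expected {len(header)}"
                )
        return problems

if __name__ == "__main__":
    for problem in check_structure("expenses.csv"):  # hypothetical file name
        print(problem)
```

The point is the shape of the feedback: a per-row message telling you exactly where the spreadsheet stops being a consistent table, rather than a silent failure on upload.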


Book Review: Clean Code by Robert C. Martin

Following my revelations regarding sharing code with other people, I thought I’d read more about the craft of writing code in the form of Clean Code: A Handbook of Agile Software Craftsmanship by Robert C. Martin. Despite the appearance of the word Agile in the title, this isn’t a book explicitly about a particular methodology […]

Sharing in 6 dimensions

Hands up everyone who’s ever used Google Docs. Okay, hands down. Have you ever noticed how many different ways there are to ‘share’ a document with someone else? We have. We use Google Docs a lot internally to store and edit company documents. And we’ve always been baffled by how many steps there are to […]

We’ve migrated to EC2

When we started work on the ScraperWiki beta, we decided to host it ‘in the cloud’ using Linode, an IaaS (Infrastructure as a Service) provider. For the uninitiated, Linode allows people to host their own virtual Linux servers without having to worry about things like maintaining their own hardware. On April 15th 2013, Linode were […]

ScraperWiki – Professional Services

How would you go about collecting, structuring and analysing 100,000 reports on Romanian companies? You could use ScraperWiki to write and host your own computer code that carries out the scraping you need, and then use our other self-service tools to clean and analyse the data. But sometimes writing your own code is not a […]
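
As a rough illustration of the self-service route, here is a short Python sketch that scrapes a listing of reports and stores it in SQLite, the storage format ScraperWiki datasets use. The URL, CSS selectors and table schema are invented for the example; a real scraper for 100,000 company reports would need rather more care (pagination limits, retries, rate limiting).

```python
import sqlite3

import requests
from bs4 import BeautifulSoup

# Hypothetical listing page; the real reports live elsewhere.
BASE_URL = "https://example.com/company-reports?page={page}"

def scrape_page(page):
    """Fetch one listing page and yield (company, report_url) pairs."""
    html = requests.get(BASE_URL.format(page=page), timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    for row in soup.select("table.reports tr"):  # invented selector
        cells = row.find_all("td")
        if len(cells) >= 2:
            link = cells[1].find("a")
            if link is not None:
                yield cells[0].get_text(strip=True), link["href"]

def main():
    db = sqlite3.connect("reports.sqlite")
    db.execute(
        "CREATE TABLE IF NOT EXISTS reports (company TEXT, url TEXT UNIQUE)"
    )
    for page in range(1, 11):  # small sample run, not all 100,000
        for company, url in scrape_page(page):
            db.execute(
                "INSERT OR IGNORE INTO reports VALUES (?, ?)", (company, url)
            )
        db.commit()

if __name__ == "__main__":
    main()
```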

Open your data with ScraperWiki

Open data activists, start your engines. Following on from last week’s announcement about publishing open data from ScraperWiki, we’re now excited to unveil the first iteration of the “Open your data” tool, for publishing ScraperWiki datasets to any open data catalogue powered by the OKFN’s CKAN technology. Try it out on your own datasets. You’ll […]
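
Under the hood, publishing to a CKAN catalogue boils down to a call to CKAN’s action API. Here is a sketch of that call in Python using the requests library; the catalogue URL, API key and dataset fields are placeholders, and the “Open your data” tool handles all of this for you.

```python
import requests

# Placeholders: point these at your own CKAN catalogue and API key.
CKAN_SITE = "https://demo.ckan.org"
API_KEY = "YOUR-CKAN-API-KEY"

def publish_dataset(name, title, notes):
    """Create a dataset record via CKAN's package_create action."""
    response = requests.post(
        f"{CKAN_SITE}/api/3/action/package_create",
        json={"name": name, "title": title, "notes": notes},
        headers={"Authorization": API_KEY},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["result"]

if __name__ == "__main__":
    result = publish_dataset(
        name="example-expenses",                       # illustrative dataset
        title="Example expenses",
        notes="Published from a ScraperWiki dataset.",
    )
    print(result["id"])
```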

We're hiring!