Hi! We've renamed ScraperWiki.
The product is now QuickCode and the company is The Sensible Code Company.


The next evolution of ScraperWiki

Quietly, over the last few months, we’ve been rebuilding both the backend and the frontend of ScraperWiki.

The new ScraperWiki has been built from the ground up to be more powerful for data scientists, and easier to use for everyone else. At its core, it’s about empowering people to take hold of their data, to analyse it, combine it, and make value from it.



We can’t wait to let you try it in January. In the meantime, however, we’re pleased to announce that all of our corporate customers are already migrating to the new ScraperWiki for scraping, storing and visualising their private datasets.

If you want data scraping, cleaning or analysing, then you can join them. Please get in touch. We’ve got a data hub and a team of data scientists itching to help.


7 Responses to “The next evolution of ScraperWiki”

  1. Firdaus Adib December 21, 2012 at 5:26 pm #

    I like the new design. It gives a metro-like feel.

    Visualization is the feature I’ve always wanted. When is this expected to be fully deployed to users?

  2. Francis Irving December 29, 2012 at 12:34 am #

    Firdaus – initial version in January. It’ll really show itself as the tools mature over the months after that.

  3. Srivats P March 5, 2013 at 2:49 pm #

    Is the new backend functional now? If not, what’s the ETA? Can we expect scrapers to run at their scheduled times, rather than the erratic timing of the old one?

    • Zarino Zappia March 5, 2013 at 3:04 pm #

      The new platform’s largely functional, and is being used by our Data Services customers and a handful of beta testers. Getting it ready for public release has taken a little longer than anticipated, but we’re keen to get it into our users’ hands ASAP. Watch this space 🙂

      Also, if you’d like to be in our next group of beta testers, and get early access (albeit with some rough edges!) let me know.

      And yes, the new platform uses Cron to schedule tasks, so code runs at an exact time, as you’d expect.
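
      For readers unfamiliar with cron: a crontab entry pairs a five-field time schedule with a command to run. The schedule and script path below are purely illustrative, not ScraperWiki’s actual configuration:

      ```shell
      # m  h  dom mon dow  command
      # Run a scraper script every day at 06:30 (illustrative path)
      30 6 * * * /usr/bin/python /home/user/scrapers/my_scraper.py
      ```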

Trackbacks/Pingbacks

  1. A small matter of programming | ScraperWiki Data Blog - December 31, 2012

    […] We’re rebuilding ScraperWiki. […]

  2. So web scraping is easy? | ScraperWiki Data Blog - January 24, 2013

    […] That’s why we’re changing ScraperWiki. Knowing all this stuff gives you immense power and flexibility, but it’s a tall ask when you just want to quickly grab and analyse some data off the web.  By using pre-built data tools on the new ScraperWiki, you get to perform the most common tasks, quickly and easily, without having to take evening classes in Computer Science.  And then, once you hit the tool’s limitations (because eventually you always will) you can fork and tweak the code, without having to start again from scratch.  And in the process, maybe you’ll even learn something about HTML, CSS, XPath, JSON, Javascript, CSVs, Excel, PDFs, KML, SVGs… […]

  3. From future import x.scraperwiki.com | ScraperWiki Data Blog - March 19, 2013

    […] Time flies when you’re building a platform. […]
