
ScraperWiki Classic retirement guide


In July last year, we announced some exciting changes to the ScraperWiki platform, and our plans to retire ScraperWiki Classic later in the year.

That time has now come. If you’re a ScraperWiki Classic user, here’s what will be changing, and what it means for you:

Today, we’re adding a button to all ScraperWiki Classic pages, giving you single-click migration to Morph.io, a free cloud scraping site run by our awesome friends at OpenAustralia. Morph.io is very similar to the ScraperWiki Classic platform, allowing you to share the data you have scraped. If you’re an open data activist, or you work on public data projects, you should check them out!

From 12th March onwards, all scrapers on ScraperWiki Classic will be read-only: you will no longer be able to edit their code. You’ll still be able to migrate to Morph.io, or to copy the code and paste it into the “Code in your browser” tool on the new ScraperWiki. Scheduled scrapers will continue running until 17th March.

On 17th March, scheduled scrapers will stop running. We’re going to take a final copy of all public scrapers on ScraperWiki Classic, and upload them as a single repository to GitHub, in addition to the read-only archive on classic.scraperwiki.com.

Retiring ScraperWiki Classic helps us focus on our new platform and tools. The “Code in your browser” and “Open your data” tools on our new platform are perfect for journalists and researchers starting to code, and our free 20-dataset Journalist accounts are still available. So you have no excuse not to create an account and go liberate some data! 🙂

If you have any other questions, make sure to visit our ScraperWiki Classic retirement guide for more info and FAQs.

In summary…

ScraperWiki Classic is retiring on 17th March 2014.

You can migrate to Morph.io or our new “Code in your browser” tool at any point.

We’re going to keep your public code and data available in a read-only form on classic.scraperwiki.com for as long as we’re able.

4 Responses to “ScraperWiki Classic retirement guide”

  1. Ruben Woudsma March 11, 2014 at 11:32 am #

Great that there is a workaround for the Classic scraper functionality. Unfortunately it is not working as I expected: the migrated scraper is giving many errors, although it was working on ScraperWiki Classic. See: https://morph.io/rubenwoudsma/profscraper1314

    Please advise how troubleshooting can be done.

    • Zarino Zappia March 11, 2014 at 11:39 am #

      It looks like your scraper is currently failing with an exception of:

      AttributeError: 'str' object has no attribute 'CaptainID'
      

      It’s being caused by this line, which would be valid if you were writing PHP, but you’re not, you’re writing Python 😉

      print 'CaptainID ' . CaptainID
      

      Try replacing the period with a comma. If you come across any other issues, they’re likely to be with Morph.io rather than ScraperWiki, since your code has been successfully exported and is running on their servers. Try contacting them via their issue tracker, or their twitter account.
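To illustrate the fix above: in PHP, `.` concatenates strings, but in Python, `.` is attribute access, so `'CaptainID ' . CaptainID` tries to look up an attribute named `CaptainID` on the string literal and raises an AttributeError. A minimal sketch of the correction (the value of `CaptainID` here is hypothetical, standing in for whatever the scraper extracted):

```python
CaptainID = "12345"  # hypothetical scraped value

# PHP-style: 'CaptainID ' . CaptainID  -> AttributeError in Python,
# because "." means attribute access, not concatenation.

# Python options: pass multiple arguments to print (comma-separated),
# or join the strings explicitly with "+".
print('CaptainID', CaptainID)    # comma: print inserts a space
print('CaptainID ' + CaptainID)  # "+" concatenates the strings
```

Note that `+` only works when both operands are strings; if `CaptainID` were a number, the comma form (or `str(CaptainID)`) avoids a TypeError.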

      • Ruben Woudsma March 13, 2014 at 11:02 am #

Thank you for your response. I have updated the data and, with some help via their issue tracker, I have solved the issue. Now I only need to get their API to work, since connecting via jQuery Ajax looks a bit different from what I was doing with your API.

Trackbacks/Pingbacks

  1. A shout out for ScraperWiki | Andrew Wheeler - March 9, 2014

    […] previously used what is now called ScraperWiki Classic. But since the service is migrating to a new platform my prior scripts will not continue […]
