Free community accounts on the ScraperWiki Beta
https://blog.scraperwiki.com/2013/05/free-community-accounts/
10 May 2013

We’ve been teasing and tempting you with blog posts about the first few tools on the new ScraperWiki Beta for a while now. It’s time to let you try them out first-hand.

As of right now, the new ScraperWiki Beta is open for you, your aunt – anyone – to sign up for a free community account: check out beta.scraperwiki.com.

We’re really excited. Not only does this mean that all of our Classic Premium Account holders and our new private beta applicants have been settled into the new platform, but regular Classic users now get to try the new ScraperWiki out for free.

The new ScraperWiki beta is a little rough around the edges, but it can already do everything ScraperWiki Classic did, and more. As we (and you!) develop and share new tools on the platform, it’s only going to get more powerful and more exciting.

The “Code in your browser” tool will let you copy and paste your scrapers across from ScraperWiki Classic, while “Search for Tweets”, “Summarise Automatically” and “Query with SQL” should give you an idea of how simple and focussed ScraperWiki tools are meant to be. The new ScraperWiki isn’t one monolithic app – it’s an ever-expanding collection of tools that interact and plug into each other to help you get your job done. I can’t wait to see more tools appear in the near future!
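To make the “small, focussed tool” idea concrete, here’s a minimal sketch of the kind of scraper you might paste into the Code in your browser tool. It deliberately uses plain requests, BeautifulSoup and Python’s built-in sqlite3 rather than any ScraperWiki helper library, and the URL, CSS selector, table name and database file name are all made up for illustration – the point is simply that a scraper writes rows into an ordinary SQLite table that a tool like Query with SQL can then pick up.

```python
# Hypothetical scraper sketch: fetch a page, pull rows out of an HTML table,
# and store them in a SQLite table for other tools (e.g. an SQL query tool).
# The URL, CSS selector, table name and database file name are illustrative.
import sqlite3

import requests
from bs4 import BeautifulSoup

response = requests.get("https://example.com/prices")
soup = BeautifulSoup(response.text, "html.parser")

rows = []
for tr in soup.select("table#prices tr")[1:]:      # skip the header row
    cells = [td.get_text(strip=True) for td in tr.find_all("td")]
    if len(cells) >= 2:
        rows.append((cells[0], cells[1]))

conn = sqlite3.connect("scraperwiki.sqlite")        # assumed output file name
conn.execute("CREATE TABLE IF NOT EXISTS prices (item TEXT, price TEXT)")
conn.executemany("INSERT INTO prices VALUES (?, ?)", rows)
conn.commit()
conn.close()
```

Once the data is sitting in an ordinary SQLite table like that, a query tool only needs to run something like `SELECT item, price FROM prices ORDER BY item` against it – exactly the sort of small, composable step the new tools are built around.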

To find out more about how the new Beta differs from ScraperWiki Classic, check out our “What’s new” guide. To report any bugs or missing features, raise an issue on our GitHub repo or email Zach, our community manager, at zach@scraperwiki.com.

Free ScraperWiki Community accounts. Come try us out: beta.scraperwiki.com.

From future import x.scraperwiki.com
https://blog.scraperwiki.com/2013/03/from-future-import-x-scraperwiki-com/
19 March 2013

Time flies when you’re building a platform.

At the start of the year, we announced the beginnings of a new, more powerful, more flexible ScraperWiki. More powerful because it exposes industry standards like SQL, SSH, and a persistent filesystem to developers, so they can scrape and crunch and export data pretty much however they like. More flexible because, at its heart, the new ScraperWiki is an ecosystem of end-user tools, enabling domain experts, managers, journalists, kittens to work with data without writing a line of code.
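As a hedged illustration of what those “industry standards” buy you in practice: if we assume each data hub is essentially a box with a normal filesystem and a SQLite database file on it, a developer can point an ordinary script at that file and crunch or export whatever they like, with no platform-specific API in the way. The database path and table name below are assumptions made for the example.

```python
# Illustrative sketch: query a SQLite database sitting on the box's persistent
# filesystem and export the result as CSV. The database path and table name
# are assumptions for this example, not documented platform conventions.
import csv
import sqlite3

DB_PATH = "scraperwiki.sqlite"   # assumed location of the box's database file

conn = sqlite3.connect(DB_PATH)
conn.row_factory = sqlite3.Row

query = """
    SELECT item, COUNT(*) AS n
    FROM prices
    GROUP BY item
    ORDER BY n DESC
"""

with open("summary.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["item", "n"])
    for row in conn.execute(query):
        writer.writerow([row["item"], row["n"]])

conn.close()
```

And because the box also exposes SSH, a script like this can be run directly on it, or the resulting summary.csv copied down with standard tools – the same workflow a developer would use on any Unix machine.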

At the time, we were pleased to announce that all of our corporate Data Services customers were happily using the new platform (admittedly, with a few rough edges!). Lots has changed since then (seriously – take a look at the code!) and we’ve learnt a lot about how users from all sorts of different backgrounds expect to see, collect and interact with their data. As a guy with UX roots, I find this stuff fascinating – perhaps something for future blog posts!

Anyway, back to the future…

Last week, we invited our ‘Premium Account’ holders from ScraperWiki Classic to come and try the new ScraperWiki out. Each of them got their own private data hub, pre-installed with all of their Classic scrapers and views, plus access to a basic suite of tools for importing, visualising and exporting data (with far more to come).

[Screenshot: Zarino’s data hub]

The feedback we’ve had so far has been really positive, so I wanted to say a big public thank you to everyone in this first tranche of users – you awesome, data-wrangling trail-blazers, you.

But we’re not standing still. Since our December announcement, we’ve collated a shortlist of early adopters: people who are pushing the boundaries of what Classic can offer, or who have expressed interest in the new platform on our blog, mailing list, or Twitter. Once we’ve made some improvements, and put the finishing touches on our first set of end-user tools, we’ll be inviting them to put the new ScraperWiki to the test.

If you’d like to be part of that early adopter shortlist, leave a comment below, or email new@scraperwiki.com. We’d love to have you on board.

The next evolution of ScraperWiki
https://blog.scraperwiki.com/2012/12/the-next-evolution-of-scraperwiki/
21 December 2012

Quietly, over the last few months, we’ve been rebuilding both the backend and the frontend of ScraperWiki.

The new ScraperWiki has been built from the ground up to be more powerful for data scientists, and easier to use for everyone else. At its core, it’s about empowering people to take hold of their data: to analyse it, combine it, and make value from it.

[Screenshots: the new ScraperWiki homepage and a dataset view]

We can’t wait to let you try it in January. In the meantime, however, we’re pleased to announce that all of our corporate customers are already migrating to the new ScraperWiki for scraping, storing and visualising their private datasets.

If you need data scraping, cleaning or analysing, you can join them. Please get in touch. We’ve got a data hub and a team of data scientists itching to help.
