Hi! We've renamed ScraperWiki.
The product is now QuickCode and the company is The Sensible Code Company.


One Response to “ScraperWiki tried to serve both power users and programmers with the same product”

  1. Greg Riice January 17, 2013 at 11:58 am #

    Hi ScraperWiki. Yes, I'm one of the "anyone"s who has been unable to use your tools due to my non-real-programmer status. I hope to see more options soon. I saw your post suggesting a review of Ms. Nardi's book from 1993, along with your remarks, "we're making. One unified platform that exposes the true power of a raw UNIX environment and a set of industry standards (like HTTP and SQL)" – yikes! Please translate that into text and CSV files, to avoid non-programmers' immediate flight.
    I've struggled for years to handle weekly changing data for event information at 2200 venues (a relatively stable set), with data that has to be scraped from approximately 12 different websites, each with its own UI and navigation. My customers/site visitors don't need database relationships, nor much search functionality – especially if I pre-sort the info and automatically present it based on geoIP location – so I've avoided the hardware memory requirements and the time, labor, and learning of any SQL and most NoSQL tools.

    I was happy to discover that old standby Excel, and even the more recent Google Spreadsheets, offer SEEMINGLY adequate interaction between web data and desktop processing (XPath/XQuery and importXML) – though I've yet to complete an automated weekly routine.
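    For readers unfamiliar with the approach the commenter mentions: a Google Sheets formula like `=IMPORTXML(url, xpath)` fetches a page and extracts nodes matching an XPath expression. A rough sketch of the same idea in Python, using only the standard library and a made-up inline document (the venue/event data here is purely illustrative, not the commenter's actual sources):

    ```python
    # Sketch of what an IMPORTXML-style extraction does:
    # parse a document, then pull values out with an XPath expression.
    # In a spreadsheet this might look like:
    #   =IMPORTXML("https://example.com/events.xml", "//event/title")
    import xml.etree.ElementTree as ET

    # Hypothetical event feed, standing in for a fetched web page.
    xml_doc = """
    <venues>
      <event><title>Open Mic Night</title><date>2013-01-18</date></event>
      <event><title>Jazz Trio</title><date>2013-01-19</date></event>
    </venues>
    """

    root = ET.fromstring(xml_doc)
    # ElementTree supports a limited XPath subset, enough for simple extraction.
    titles = [e.text for e in root.findall(".//event/title")]
    print(titles)  # ['Open Mic Night', 'Jazz Trio']
    ```

    Automating the "weekly routine" the commenter describes would then just mean running a script like this on a schedule and writing the results out to CSV.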

    I'd love to work with ScraperWiki, but I'm somewhere between the requirements you've published for Community Manager and Data Manager – since I'm still struggling with wrangling some fairly low-level data – but for a high-value, under-served community of over 36 million US citizens who are Deaf and Hard of Hearing (UK figures are approximately 9 million DHH).
