Job advert: Lead programmer
Oil wells, marathon results, planning applications…
ScraperWiki is a Silicon Valley-style startup, in the North West of England, in Liverpool. We’re changing the world of open data, and how data science is done together on the Internet.
We’re looking for a programmer who’d like to:
- Revolutionise the tools for sharing data, and code that works with data, on the Internet.
- Take a lead in a lean startup, having good hunches on how to improve things, but not minding when A/B testing means axing weeks of code.
In terms of skills:
- Be polyglot enough to learn Python, and to work in other languages (Ruby, JavaScript…) where necessary.
- Know one end of a clean web API and a test suite from the other.
- Be able to do some DevOps on Linux servers (we’re a small team).
- Desirable – able to make igloos.
About ScraperWiki:
- We’ve got funding (Knight News Challenge winners) and are in the brand new field of “data hubs”.
- We’re before product/market fit, so it’ll be exciting, and you can end up a key, senior person in a growing company.
Some practical things:
We’d like this to end up as a permanent position, but if you prefer, we’re happy to do individual contracts to start with.
You must be willing either to relocate to Liverpool, or to work from home and travel to our office here regularly (once a week) – so somewhere nearby is preferred.
To apply – send the following:
- A link to a previous project that you’ve worked on that you’re proud of, or a description of it if it isn’t publicly visible.
- A link to a scraper or view you’ve made on ScraperWiki, involving a dataset that you find interesting for some reason (there’s a rough sketch of the kind of thing below).
- Any questions you have about the job.
Send them along to francis@scraperwiki.com with the word swjob2 in the subject (and yes, that means no agencies, unless the candidates do that themselves).
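If you’ve not made one before, here’s a minimal sketch of the kind of scraper people write on ScraperWiki – the URL, table layout and column names are invented for illustration, and it assumes the scraperwiki Python library’s scrape() and sqlite.save() helpers:

    import scraperwiki
    import lxml.html

    # Hypothetical source page: a table of pool temperatures, one row per pool.
    html = scraperwiki.scrape("http://example.com/pool-temperatures")
    root = lxml.html.fromstring(html)

    for row in root.xpath("//table[@id='results']//tr")[1:]:  # skip the header row
        cells = [cell.text_content().strip() for cell in row.xpath("./td")]
        if len(cells) >= 2:
            # 'pool' is the unique key, so scheduled re-runs update rows in place
            # rather than piling up duplicates.
            scraperwiki.sqlite.save(
                unique_keys=["pool"],
                data={"pool": cells[0], "temperature_c": cells[1]},
            )

The dataset itself matters more than the code – pick something you actually find interesting.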
Pool temperatures, company registrations, dairy prices…
What’s your revenue model? i.e. How do you monetize your value proposition? (Can’t believe I just typed that 2nd clarification question)
Hi David – good question!
We’ve currently got a feature called ‘vaults’, which is in use by a few clients. It lets you keep groups of scrapers private, sharing them only with the people who need to work on them or see them. We charge for that based on pages scraped: https://scraperwiki.com/pricing/
We also offer data services (mainly scraping!) – to access those, request data here: https://scraperwiki.com/request_data/
As the job advert says, we’re before product/market fit – we’re currently testing and iterating on our revenue model. It’s important that anyone coming to work for ScraperWiki understands that, and finds it exciting!
We’re certainly learning lots about how people use data, and how it can be made more valuable.