QuickCode

Contents

  1. Scrapers

  2. History

  3. See also

  4. References

  5. External links

{{Infobox website
| name = QuickCode
| logo =
| screenshot =
| collapsible =
| collapsetext =
| caption =
| url = {{URL|https://quickcode.io/}}
| alexa = {{DecreasePositive}} 133,089 ({{as of|2014|4|1|alt=April 2014}})[1]
| commercial =
| type =
| language = English
| registration =
| owner =
| author =
| launch date =
| current status = Active
| revenue = Sponsored by 4iP[2]
| content license = Affero General Public License[3]
}}

QuickCode (formerly ScraperWiki) is a web-based platform for collaboratively building programs to extract and analyze public (online) data, in a wiki-like fashion. "Scraper" refers to screen scrapers, programs that extract data from websites. "Wiki" means that any user with programming experience can create or edit such programs, either to extract new data or to analyze existing datasets.[2] The website's main use is to provide a place for programmers and journalists to collaborate on analyzing public data.[4][5][6][7][8][9]

The service was renamed circa 2016, as "it isn't a wiki or just for scraping any more".[10] At the same time, the eponymous parent company was renamed 'The Sensible Code Company'.[10]

Scrapers

Scrapers are created in a browser-based IDE or by connecting over SSH to a server running Linux. They can be written in a variety of programming languages, including Perl, Python, Ruby, JavaScript and R.
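As a rough illustration of the workflow, the following is a minimal sketch of a Python scraper in the style of the classic platform. It assumes the historical scraperwiki helper library; the URL, table layout and field names are placeholders, not taken from any real scraper.

<syntaxhighlight lang="python">
# A minimal ScraperWiki-style scraper (illustrative only: the URL and
# field names are placeholders; assumes the historical "scraperwiki"
# helper library, which bundled page fetching with a built-in SQLite store).
import scraperwiki   # classic platform helper library
import lxml.html     # HTML parser commonly used on the platform

# Fetch the page to be scraped (placeholder URL).
html = scraperwiki.scrape("http://example.com/members")
root = lxml.html.fromstring(html)

# Walk the rows of the first table and save each one to the scraper's
# built-in SQLite datastore, upserting on the "name" column.
for row in root.xpath("//table[1]//tr"):
    cells = [td.text_content().strip() for td in row.xpath("./td")]
    if len(cells) >= 2:
        scraperwiki.sqlite.save(
            unique_keys=["name"],
            data={"name": cells[0], "role": cells[1]},
        )
</syntaxhighlight>

On the original service, data saved this way went into a per-scraper SQLite database that other users could query, download, or build further analyses on.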

History

ScraperWiki was founded in 2009 by Julian Todd and Aidan McGuire. It was initially funded by 4iP, the venture capital arm of the television broadcaster Channel 4. It has since attracted an additional £1 million round of funding from Enterprise Ventures.

Francis Irving is the chief executive officer of ScraperWiki.[11]

See also

  • Data-driven journalism
  • Web scraping

References

1. ^{{cite web |url=http://www.alexa.com/siteinfo/scraperwiki.com |title=Scraperwiki.com Site Info |publisher=Alexa Internet |accessdate=2014-04-01}}
2. ^{{cite web |author=Jamie Arnold |date=2009-12-01 |title=4iP invests in ScraperWiki |publisher=4iP |url=http://www.4ip.org.uk/2009/12/4ip-invests-in-scraperwiki/ }}
3. ^{{cite web|url=https://github.com/sensiblecodeio/custard/blob/master/LICENCE|title=GNU Affero General Public License v3.0 - sensiblecodeio|website=GitHub|accessdate=30 December 2017}}
4. ^{{cite news |author=Cian Ginty |date=2010-11-19 |title=Hacks and hackers unite to get solid stories from difficult data |publisher=The Irish Times |url=http://www.irishtimes.com/newspaper/finance/2010/1119/1224283709384.html }}
5. ^{{cite web |author=Paul Bradshaw |date=2010-07-07 |title=An introduction to data scraping with Scraperwiki |publisher=Online Journalism Blog |url=http://onlinejournalismblog.com/2010/07/07/an-introduction-to-data-scraping-with-scraperwiki/ }}
6. ^{{cite news |author=Charles Arthur |date=2010-11-22 |title=Analysing data is the future for journalists, says Tim Berners-Lee |publisher=The Guardian |url=https://www.theguardian.com/media/2010/nov/22/data-analysis-tim-berners-lee }}
7. ^{{cite news |author=Deirdre McArdle |date=2010-11-19 |title=In The Papers 19 November |publisher=ENN |url=http://www.enn.ie/story/show/10125973 }}
8. ^{{cite web |date=2010-11-15 |title=Journalists and developers join forces for Lichfield ‘hack day’ |publisher=The Lichfield Blog |url=http://thelichfieldblog.co.uk/2010/11/15/journalists-and-developers-join-forces-for-lichfield-hack-day/ }}
9. ^{{cite news |author=Alison Spillane |date=2010-11-17 |title=Online tool helps to create greater public data transparency |publisher=Politico |url=http://politico.ie/index.php?option=com_content&view=article&id=6906:online-tool-helps-to-create-greater-public-data-transparency&catid=193:science-tech&Itemid=880 }}
10. ^{{cite web|url=https://scraperwiki.com/|title=ScraperWiki|work=ScraperWiki|accessdate=7 February 2017}}
11. ^{{cite web |url=http://blog.okfn.org/2012/03/09/from-cms-to-dms-c-is-for-content-d-is-for-data/ |author=Francis Irving, Rufus Pollock |date=9 March 2012 |title=From CMS to DMS: C is for Content, D is for Data |work=Open Knowledge Blog}}

External links

  • {{Official website|http://quickcode.io/}}
  • [https://github.com/sensiblecodeio/custard Custard repository on GitHub]
{{wiki-stub}}

Categories: Collaborative projects | Wikis | Social information processing | Web analytics | Mashup (web application hybrid) | Web scraping | Software using the GNU AGPL license
