
This graphic symbolizes the use of ideas from a wide range of individuals, as used in crowdsourcing.

Crowdsourcing involves a large group of dispersed participants contributing or producing goods or services—including ideas, votes, micro-tasks, and finances—for payment or as volunteers. Contemporary crowdsourcing often involves digital platforms to attract and divide work between participants to achieve a cumulative result. Crowdsourcing is not limited to online activity, however, and there are various historical examples of crowdsourcing. The word crowdsourcing is a portmanteau of "crowd" and "outsourcing".[1][2][3] In contrast to outsourcing, crowdsourcing usually involves less specific and more public groups of participants.[4][5][6]

Advantages of using crowdsourcing include lowered costs, improved speed, improved quality, increased flexibility, and/or increased scalability of the work, as well as the promotion of diversity.[7][8] Crowdsourcing methods include competitions, virtual labor markets, open online collaboration, and data donation.[8][9][10] Some forms of crowdsourcing, such as "idea competitions" or "innovation contests", provide ways for organizations to learn beyond the "base of minds" provided by their employees (e.g. LEGO Ideas).[11][12] Commercial platforms, such as Amazon Mechanical Turk, match microtasks submitted by requesters to workers who perform them. Crowdsourcing is also used by nonprofit organizations to develop common goods, such as Wikipedia.[13]

Definitions

The term crowdsourcing was coined in 2006 by two editors at Wired, Jeff Howe and Mark Robinson, to describe how businesses were using the Internet to "outsource work to the crowd," which quickly led to the portmanteau "crowdsourcing".[14] Howe published a definition for the term in a blog post in June 2006:[15]

Simply defined, crowdsourcing represents the act of a company or institution taking a function once performed by employees and outsourcing it to an undefined (and generally large) network of people in the form of an open call. This can take the form of peer-production (when the job is performed collaboratively), but is also often undertaken by sole individuals. The crucial prerequisite is the use of the open call format and the large network of potential laborers.

Daren C. Brabham defined crowdsourcing as an "online, distributed problem-solving and production model."[16] Kristen L. Guth and Brabham found that the performance of ideas offered on crowdsourcing platforms is affected not only by their quality, but also by the communication among users about the ideas and their presentation on the platform itself.[17] After studying more than 40 definitions of crowdsourcing in the scientific and popular literature, Enrique Estellés-Arolas and Fernando González Ladrón-de-Guevara, researchers at the Technical University of Valencia, developed a new integrating definition.[3]

Crowdsourcing can either take an explicit or an implicit route. Explicit crowdsourcing lets users work together to evaluate, share, and build different specific tasks, while implicit crowdsourcing means that users solve a problem as a side effect of something else they are doing. With explicit crowdsourcing, users can evaluate particular items like books or webpages, or share by posting products or items. Users can also build artifacts by providing information and editing other people's work. Implicit crowdsourcing can take two forms: standalone and piggyback. Standalone allows people to solve problems as a side effect of the task they are doing, whereas piggyback takes users' information from a third-party website to gather information.

Despite the multiplicity of definitions for crowdsourcing, one constant has been the broadcasting of problems to the public and an open call for contributions to help solve them.[original research?] Members of the public submit solutions that are then owned by the entity that originally broadcast the problem. In some cases, the contributor of the solution is compensated monetarily, with prizes, or with public recognition; in other cases, the only rewards may be praise or intellectual satisfaction. Crowdsourcing may produce solutions from amateurs or volunteers working in their spare time, from experts, or from small businesses.[14]

Historical examples


While the term "crowdsourcing" was popularized online to describe Internet-based activities,[16] some earlier projects can, in retrospect, be described as crowdsourcing.

Timeline of crowdsourcing examples

Early competitions

Crowdsourcing has often been used in the past as a competition to discover a solution. The French government proposed several such competitions, often rewarded with Montyon Prizes.[31] These included the Alkali prize, which rewarded a process for producing alkali from sea salt and led to the Leblanc process, and Fourneyron's turbine, the first commercial hydraulic turbine.[32]

In response to a challenge from the French government, Nicolas Appert won a prize for inventing a new method of food preservation that involved sealing food in air-tight jars.[33] The British government provided a similar reward for an easy way to determine a ship's longitude via the Longitude Prize. During the Great Depression, out-of-work clerks tabulated higher mathematical functions in the Mathematical Tables Project as an outreach project.[34][unreliable source?] One of the largest crowdsourcing campaigns was a public design contest in 2010, hosted by the Indian government's finance ministry, to create a symbol for the Indian rupee. Thousands of people sent in entries before the government settled on the final symbol, based on the Devanagari letter "Ra".[35]

Applications

See also: List of crowdsourcing projects

A number of motivations exist for businesses to use crowdsourcing to accomplish their tasks. These include the ability to offload peak demand, access cheap labor and information, generate better results, access a wider array of talent than is present in one organization, and undertake problems that would be too difficult to solve internally.[36] Crowdsourcing allows businesses to submit problems on which contributors can work, on topics such as science, manufacturing, biotech, and medicine, optionally with monetary rewards for successful solutions. Although crowdsourcing complicated tasks can be difficult, simple work tasks, such as image labeling, can be crowdsourced cheaply and effectively.[37]

Crowdsourcing also has the potential to be a problem-solving mechanism for government and nonprofit use.[38] Urban and transit planning are prime areas for crowdsourcing. For example, from 2008 to 2009, a crowdsourcing project for transit planning in Salt Lake City was created to test the public participation process.[39] Another notable application of crowdsourcing for government problem-solving is Peer-to-Patent, which was an initiative to improve patent quality in the United States through gathering public input in a structured, productive manner.[40]

Researchers have used crowdsourcing systems such as Mechanical Turk to aid their research projects by crowdsourcing some aspects of the research process, such as data collection, parsing, and evaluation, to the public. Notable examples include using the crowd to create speech and language databases[41][42] and to conduct user studies.[43] Crowdsourcing systems give researchers the ability to gather large amounts of data and to collect data from populations and demographics they may not have access to locally.[44][failed verification]
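Posting such microtasks is typically done through a platform API. Below is a minimal sketch using Amazon Mechanical Turk's API via the boto3 Python library; the sandbox endpoint, survey URL, reward, and counts are illustrative assumptions, not details from any study cited here.

```python
import boto3

# Requester sandbox endpoint, so test HITs cost no real money.
mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# ExternalQuestion wrapping a survey hosted elsewhere (placeholder URL).
question_xml = """
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.org/survey</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>
"""

hit = mturk.create_hit(
    Title="Short language survey (about 5 minutes)",
    Description="Answer a few questions about word usage.",
    Keywords="survey, language, research",
    Reward="0.50",                     # US dollars, passed as a string
    MaxAssignments=100,                # distinct workers requested
    LifetimeInSeconds=7 * 24 * 3600,   # how long the task stays listed
    AssignmentDurationInSeconds=900,   # time allowed per worker
    Question=question_xml,
)
print("HIT ID:", hit["HIT"]["HITId"])
```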

Artists have also used crowdsourcing systems. In a project called the Sheep Market, Aaron Koblin used Mechanical Turk to collect 10,000 drawings of sheep from contributors around the world.[45] Artist Sam Brown leveraged the crowd by asking visitors of his website explodingdog to send him sentences to use as inspirations for his paintings.[46] Art curator Andrea Grover argues that individuals tend to be more open in crowdsourced projects because they are not being physically judged or scrutinized.[47] As with other types of uses, artists use crowdsourcing systems to generate and collect data. The crowd also can be used to provide inspiration and to collect financial support for an artist's work.[48]

In navigation systems, INRIX used data crowdsourced from 100 million drivers' driving times to provide better GPS routing and real-time traffic updates.[49]

In science

Astronomy

Crowdsourcing in astronomy was used in the early 19th century by astronomer Denison Olmsted. After being awakened late one November night by a meteor shower, Olmsted noticed a pattern in the shooting stars and wrote a brief report for the local newspaper. "As the cause of 'Falling Stars' is not understood by meteorologists, it is desirable to collect all the facts attending this phenomenon, stated with as much precision as possible," Olmsted wrote to readers, in a report subsequently picked up and reprinted by newspapers nationwide. Responses poured in from many states, along with scientists' observations sent to the American Journal of Science and Arts.[50] These responses helped him make a series of scientific breakthroughs, including the observations that meteor showers are seen nationwide and that meteors fall from space under the influence of gravity. The responses also allowed him to approximate a velocity for the meteors.[citation needed]

A more recent version of crowdsourcing in astronomy is NASA's photo organizing project,[51] which asked internet users to browse photos taken from space and try to identify the location the picture is documenting.[52]

Energy system research

Energy system models require large and diverse datasets, increasingly so given the trend towards greater temporal and spatial resolution.[53] In response, there have been several initiatives to crowdsource this data. Launched in December 2009, OpenEI is a collaborative website run by the US government that provides open energy data.[54][55] While much of its information is from US government sources, the platform also seeks crowdsourced input from around the world.[56] The semantic wiki and database Enipedia also publishes energy systems data using the concept of crowdsourced open information. Enipedia went live in March 2011.[57][58]: 184–188 

Genealogy research

Genealogical research used crowdsourcing techniques long before personal computers were common. Beginning in 1942, The Church of Jesus Christ of Latter-day Saints encouraged its members to submit information about their ancestors, and the submitted information was gathered into a single collection. In 1969, to encourage more participation, the church started the three-generation program, in which members were asked to prepare documented family group record forms for their first three generations. The program was later expanded to encourage members to research at least four generations and became known as the four-generation program.[59]

Institutes that have records of interest to genealogical research have used crowds of volunteers to create catalogs and indices to records.[citation needed]

Genetic genealogy research

Genetic genealogy is a combination of traditional genealogy with genetics. The rise of personal DNA testing after the turn of the century, by companies such as Gene by Gene, FTDNA, GeneTree, 23andMe, and Ancestry.com, has led to public and semi-public databases of DNA test results built using crowdsourcing techniques. Citizen science projects have included the support, organization, and dissemination of personal DNA (genetic) testing. Similar to amateur astronomy, citizen scientists encouraged by volunteer organizations like the International Society of Genetic Genealogy[60] have provided valuable information and research to the professional scientific community.[61] The Genographic Project, which began in 2005, is a research project carried out by the National Geographic Society's scientific team to reveal patterns of human migration using crowdsourced DNA testing and reporting of results.[62]

Ornithology

Another early example of crowdsourcing occurred in the field of ornithology. On 25 December 1900, Frank Chapman, an early officer of the National Audubon Society, initiated a tradition dubbed the "Christmas Day Bird Census". The project called on birders from across North America to count and record the number of birds of each species they witnessed on Christmas Day. The project was successful, and the records from 27 contributors were compiled into one bird census tallying around 90 species of birds.[63] This large-scale collection of data constituted an early form of citizen science, the premise upon which crowdsourcing is based. In the 2012 census, more than 70,000 individuals participated across 2,369 bird count circles.[64] Christmas 2014 marked the National Audubon Society's 115th annual Christmas Bird Count.

Seismology

The European-Mediterranean Seismological Centre (EMSC) has developed a seismic detection system by monitoring the traffic peaks on its website and analyzing keywords used on Twitter.[65]
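EMSC's production pipeline is not public; the following is only a rough sketch of the underlying idea, flagging minutes whose request rate jumps well above a rolling baseline (window size and threshold are invented for illustration).

```python
from collections import deque

def detect_traffic_peaks(hits_per_minute, window=60, factor=5.0):
    """Yield indices of minutes whose hit count exceeds `factor` times
    the trailing average, a crude stand-in for the surge of visitors
    EMSC observes after a widely felt earthquake."""
    recent = deque(maxlen=window)
    for i, hits in enumerate(hits_per_minute):
        if len(recent) == window:
            baseline = sum(recent) / window
            if baseline > 0 and hits > factor * baseline:
                yield i
        recent.append(hits)

# Quiet traffic with a sudden surge at minute 90.
traffic = [100] * 90 + [900] + [100] * 30
print(list(detect_traffic_peaks(traffic)))  # [90]
```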

In journalism

See also: Collaborative journalism and Citizen journalism

Crowdsourcing is increasingly used in professional journalism. Journalists organize crowdsourced information by fact-checking it and then use it in their articles as they see fit.[citation needed] A daily newspaper in Sweden successfully used crowdsourcing to investigate home loan interest rates in the country in 2013–2014, resulting in over 50,000 submissions.[66] A daily newspaper in Finland crowdsourced an investigation into stock short-selling in 2011–2012, and the crowdsourced information led to the revelation of a tax evasion system at a Finnish bank; the bank executive was fired and policy changes followed.[67] TalkingPointsMemo in the United States asked its readers to examine 3,000 emails concerning the firing of federal prosecutors in 2008. The British newspaper The Guardian crowdsourced the examination of hundreds of thousands of documents in 2009.[68]

Data donation

Data donation is a crowdsourcing approach to gathering digital data. It is used by researchers and organizations to gain access to data from online platforms, websites, search engines, apps, and devices. Data donation projects usually rely on participants volunteering their authentic digital profile information.

In public policy

Crowdsourcing public policy and the production of public services is also referred to as citizen sourcing. While some scholars regard crowdsourcing for this purpose as a policy tool[75] or a definite means of co-production,[76] others question this and argue that crowdsourcing should be considered only a technological enabler that increases the speed and ease of participation.[77] Crowdsourcing can also play a role in democratization.[78]

The first conference focusing on crowdsourcing for politics and policy took place at Oxford University, under the auspices of the Oxford Internet Institute, in 2014. Since 2012,[79] research has emerged on the use of crowdsourcing for policy purposes,[80][81] including experimental investigation of virtual labor markets for policy assessment[82] and assessment of the potential for citizen involvement in process innovation for public administration.[83]

Governments across the world are increasingly using crowdsourcing for knowledge discovery and civic engagement.[citation needed] Iceland crowdsourced its constitutional reform process in 2011, and Finland has crowdsourced several law reform processes to address its off-road traffic laws. The Finnish government allowed citizens to use an online forum to discuss problems and possible resolutions regarding some off-road traffic laws.[citation needed] The crowdsourced information and resolutions were then passed on to legislators to refer to when making decisions, allowing citizens to contribute to public policy in a more direct manner.[84][85] Palo Alto has crowdsourced feedback for its Comprehensive City Plan update in a process started in 2015.[86] The House of Representatives in Brazil has used crowdsourcing in policy reforms.[87]

NASA used crowdsourcing to analyze large sets of images. As part of the Open Government Initiative of the Obama Administration, the General Services Administration collected and amalgamated suggestions for improving federal websites.[87]

For part of the Obama and Trump Administrations, the We the People system collected signatures on petitions, which were entitled to an official response from the White House once a certain number had been reached. Several U.S. federal agencies ran inducement prize contests, including NASA and the Environmental Protection Agency.[88][87]

Language-related data

Crowdsourcing has been used extensively for gathering language-related data.

For dictionary work, crowdsourcing was applied over a hundred years ago by the Oxford English Dictionary editors using paper and postage. It has also been used for collecting examples of proverbs on a specific topic (e.g. religious pluralism) for a printed journal.[89] Crowdsourcing language-related data online has proven very effective and many dictionary compilation projects used crowdsourcing. It is used particularly for specialist topics and languages that are not well documented, such as for the Oromo language.[90] Software programs have been developed for crowdsourced dictionaries, such as WeSay.[91] A slightly different form of crowdsourcing for language data was the online creation of scientific and mathematical terminology for American Sign Language.[92]

In linguistics, crowdsourcing strategies have been applied to estimate word knowledge, vocabulary size, and word origin.[93] Implicit crowdsourcing on social media has also been used to approximate sociolinguistic data efficiently. Reddit conversations in various location-based subreddits were analyzed for the presence of grammatical forms unique to a regional dialect, and the findings were used to map the extent of the speaker population; the results could roughly approximate large-scale surveys on the subject without field interviews.[94]

Mining publicly available social media conversations in this way is a form of implicit crowdsourcing that approximates the geographic extent of speaker dialects.[94] Proverb collection is also being done via crowdsourcing on the Web, most notably for the Pashto language of Afghanistan and Pakistan.[95][96][97] Crowdsourcing has been used extensively to collect high-quality gold standards for building automatic systems in natural language processing (e.g. named entity recognition, entity linking).[98]
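The cited study's methodology is more involved, but its core measurement, the relative frequency of a regionally marked construction in text tied to each place, can be sketched as follows (the tiny corpora and region names are hypothetical placeholders for scraped subreddit text).

```python
import re

# Hypothetical per-region text, e.g. scraped from location-based subreddits.
corpora = {
    "pittsburgh": "The car needs washed. My yard needs mowed before the game.",
    "boston":     "The car needs to be washed. The yard needs to be mowed.",
}

# "needs" + past participle ("needs washed") is a regionally marked form.
FORM = re.compile(r"\bneeds\s+\w+ed\b", re.IGNORECASE)

for region, text in corpora.items():
    tokens = len(text.split())
    hits = len(FORM.findall(text))
    print(f"{region}: {hits} matches in {tokens} tokens "
          f"({1000 * hits / tokens:.1f} per 1,000 tokens)")
```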

In product design

LEGO allows users to work on new product designs while conducting requirements testing. Any user can submit a design for a product, and other users can vote on it. Once a submitted product has received 10,000 votes, it is formally reviewed in stages and, if no impediments such as legal flaws are identified, goes into production. The creator receives royalties from the net income.[99]

In business

Homeowners can use Airbnb to list their accommodation or unused rooms, setting their own nightly, weekly, and monthly rates. The business, in turn, charges guests and hosts fees: guests usually end up spending between $9 and $15[100] and pay a booking fee every time they book a room, while the host pays a service fee on the amount due. The company has had listings in 34,000 cities in more than 190 countries.[citation needed]

Other examples

Methods

Internet and digital technologies have massively expanded the opportunities for crowdsourcing. However, the effect of user communication and platform presentation can have a major bearing on the success of an online crowdsourcing project.[17] Crowdsourced problems range from huge tasks (such as finding alien life or mapping earthquake zones) to very small ones (such as identifying images). Some examples of successful crowdsourcing themes are problems that bug people, things that make people feel good about themselves, projects that tap into niche knowledge of proud experts, and subjects that people find sympathetic.[116]

Crowdsourcing can take either an explicit or an implicit route (see § Definitions above).

In his 2013 book, Crowdsourcing, Daren C. Brabham puts forth a problem-based typology of crowdsourcing approaches.[118]

Ivo Blohm identifies four types of crowdsourcing platforms: microtasking, information pooling, broadcast search, and open collaboration. They differ in the diversity and aggregation of the contributions created: the diversity of information collected can be either homogeneous or heterogeneous, and the aggregation of information can be either selective (keeping the best individual contributions) or integrative (combining all contributions into one result).[119] Common categories of crowdsourcing that have been used effectively in the commercial world include crowdvoting, crowdsolving, crowdfunding, microwork, creative crowdsourcing, crowdsource workforce management, and inducement prize contests.[120]

Crowdvoting

Crowdvoting occurs when a website gathers a large group's opinions and judgments on a certain topic. Some crowdsourcing tools and platforms allow participants to rank each other's contributions, e.g. in answer to the question "What is one thing we can do to make Acme a great company?" One common method for ranking is "like" counting, where the contribution with the most "like" votes ranks first. This method is simple and easy to understand, but it privileges early contributions, which have more time to accumulate votes.[citation needed] In recent years, several crowdsourcing companies have begun to use pairwise comparisons backed by ranking algorithms. Ranking algorithms do not penalize late contributions.[citation needed] They also produce results quicker. Ranking algorithms have proven to be at least 10 times faster than manual stack ranking.[121] One drawback, however, is that ranking algorithms are more difficult to understand than vote counting.
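The sources do not say which ranking algorithms these companies use; as one illustrative possibility, an Elo-style update converts pairwise choices into scores that depend only on comparison outcomes, not on how long a contribution has been posted. A minimal sketch with invented idea names:

```python
def elo_update(r_winner, r_loser, k=32):
    """Standard Elo rating update for one pairwise comparison."""
    expect_win = 1 / (1 + 10 ** ((r_loser - r_winner) / 400))
    delta = k * (1 - expect_win)
    return r_winner + delta, r_loser - delta

# Every contribution starts at the same rating, so late entries are not
# penalized: only comparison outcomes move the scores.
ratings = {"idea_a": 1000.0, "idea_b": 1000.0, "idea_c": 1000.0}
comparisons = [("idea_a", "idea_b"), ("idea_c", "idea_a"), ("idea_c", "idea_b")]
for winner, loser in comparisons:
    ratings[winner], ratings[loser] = elo_update(ratings[winner], ratings[loser])

for idea, rating in sorted(ratings.items(), key=lambda kv: -kv[1]):
    print(idea, round(rating, 1))
```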

The Iowa Electronic Market is a prediction market that gathers crowds' views on politics and tries to ensure accuracy by having participants pay money to buy and sell contracts based on political outcomes.[122] Some of the most famous examples have made use of social media channels: Domino's Pizza, Coca-Cola, Heineken, and Sam Adams have crowdsourced a new pizza, bottle design, beer, and song, respectively.[123] The website Threadless selects the T-shirts it sells by having users submit designs and vote on the ones they like, which are then printed and made available for purchase.[16]
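In a winner-take-all prediction market of this kind, a contract pays a fixed sum (say $1) if the event occurs and nothing otherwise, so a risk-neutral trader buying at price $p$ has expected profit

$$\mathbb{E}[\text{profit}] = \Pr(\text{event}) \cdot 1 - p,$$

which is zero exactly when $p = \Pr(\text{event})$; prices can therefore be read as the crowd's probability estimates. (This is the generic mechanism, not a description of any particular IEM contract.)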

The California Report Card (CRC), a program jointly launched in January 2014 by the Center for Information Technology Research in the Interest of Society[124] and Lt. Governor Gavin Newsom, is an example of modern-day crowd voting. Participants access the CRC online and vote on six timely issues. Through principal component analysis, the users are then placed into an online "café" in which they can present their own political opinions and grade the suggestions of other participants. This system aims to effectively involve the greater public in relevant political discussions and highlight the specific topics with which people are most concerned.
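The CRC's internal pipeline is not published; the sketch below only illustrates how principal component analysis can place participants with similar opinion profiles near each other in a two-dimensional "café" space. The six-issue ratings matrix is fabricated for the example.

```python
import numpy as np
from sklearn.decomposition import PCA

# Rows are participants; columns are their grades on six issues (1-100).
ratings = np.array([
    [90, 85, 20, 30, 75, 60],
    [88, 80, 25, 35, 70, 65],
    [10, 20, 95, 90, 30, 25],
    [15, 25, 90, 85, 35, 30],
])

# Project onto the two directions of greatest variance; similar opinion
# profiles land near each other in the resulting 2-D coordinates.
coords = PCA(n_components=2).fit_transform(ratings)
print(coords.round(2))
```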

Crowdvoting's value in the movie industry was shown when in 2009 a crowd accurately predicted the success or failure of a movie based on its trailer,[125][126] a feat that was replicated in 2013 by Google.[127]

On Reddit, users collectively rate web content, discussions, and comments, as well as questions posed to persons of interest in "AMA" and AskScience online interviews.

In 2017, Project Fanchise purchased a team in the Indoor Football League and created the Salt Lake Screaming Eagles, a fan-run team. Using a mobile app, fans voted on the team's day-to-day operations, the mascot name, player signings, and even offensive play-calling during games.[128]

Crowdfunding

Main article: Crowdfunding

Crowdfunding is the process of funding projects by a multitude of people contributing a small amount to attain a certain monetary goal, typically via the Internet.[129] Crowdfunding has been used for both commercial and charitable purposes.[130] The crowdfunding model that has been around the longest is rewards-based crowdfunding, in which people can prepurchase products, buy experiences, or simply donate. While this funding may in some cases go towards helping a business, funders are not allowed to invest and become shareholders via rewards-based crowdfunding.[131]

Individuals, businesses, and entrepreneurs can showcase their businesses and projects by creating a profile, which typically includes a short video introducing the project, a list of rewards per donation, and illustrative images.[citation needed] Funders make monetary contributions for numerous reasons:

  1. They connect to the greater purpose of the campaign, such as being a part of an entrepreneurial community and supporting an innovative idea or product.[132]
  2. They connect to a physical aspect of the campaign like rewards and gains from investment.[132]
  3. They connect to the creative display of the campaign's presentation.
  4. They want to see new products before the public.[132]

As of 2012, equity crowdfunding in the US sat in a regulatory gray area while the Securities and Exchange Commission, which had until 1 January 2013 to refine the fundraising rules, was overwhelmed with implementing Dodd-Frank and the other rules and regulations governing public companies and the way they trade. Advocates of regulation claimed that crowdfunding would open up the flood gates for fraud, called it the "wild west" of fundraising, and compared it to the 1980s days of penny stock "cold-call cowboys". The process allowed for up to $1 million to be raised without some of the usual regulations applying. Under the then-current proposal, companies would have exemptions available and be able to raise capital from a larger pool of persons, subject to lower thresholds for investor criteria, whereas the old rules required that each person be an "accredited" investor. Contributors are often recruited from social networks, and funds can be acquired through an equity purchase, loan, donation, or pre-ordering. The amounts collected have become quite high, with requests over a million dollars for software such as Trampoline Systems, which used crowdfunding to finance the commercialization of its new software.[citation needed]

Inducement prize contests

Web-based idea competitions, or inducement prize contests, often consist of generic ideas, cash prizes, and an Internet-based platform to facilitate easy idea generation and discussion. An example is IBM's 2006 "Innovation Jam", which was attended by over 140,000 international participants and yielded around 46,000 ideas.[133][134] Another example is the Netflix Prize, which asked entrants to develop a recommendation algorithm more accurate than Netflix's own. The grand prize of US$1,000,000 was awarded in 2009 to a team whose algorithm beat Netflix's own algorithm for predicting ratings by 10.06%.[citation needed]
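The Netflix Prize scored entries by root-mean-square error (RMSE) on a held-out test set, with the quoted improvement being the relative reduction against Netflix's Cinematch system: with $\hat{r}_{ui}$ the predicted and $r_{ui}$ the actual rating of user $u$ for item $i$ over $N$ test pairs,

$$\mathrm{RMSE} = \sqrt{\frac{1}{N} \sum_{(u,i)} (\hat{r}_{ui} - r_{ui})^2}, \qquad \text{improvement} = \frac{\mathrm{RMSE}_{\text{Cinematch}} - \mathrm{RMSE}_{\text{entry}}}{\mathrm{RMSE}_{\text{Cinematch}}} \times 100\%.$$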

Another example of competition-based crowdsourcing is the 2009 DARPA balloon experiment, in which DARPA placed 10 balloon markers across the United States and challenged teams to be the first to report the locations of all the balloons. Completing the challenge quickly required collaboration as well as competition, and the winning team (MIT, in less than nine hours) built its own "collaborapetitive" environment to generate participation.[135] A similar challenge was the Tag Challenge, funded by the US State Department, which required locating and photographing individuals in five cities in the US and Europe within 12 hours based only on a single photograph. The winning team located three of the suspects by mobilizing volunteers worldwide using an incentive scheme similar to the one used in the balloon challenge.[136]
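Accounts of the balloon challenge describe MIT's recursive incentive scheme as halving the reward up the referral chain: the finder of a balloon received $2,000, the person who recruited the finder $1,000, that person's recruiter $500, and so on, so the total payout per balloon was bounded by a convergent geometric series:

$$2000 \left(1 + \tfrac{1}{2} + \tfrac{1}{4} + \cdots \right) = 2000 \cdot \frac{1}{1 - \tfrac{1}{2}} = 4000.$$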

Using open innovation platforms is an effective way to crowdsource people's thoughts and ideas for research and development. The company InnoCentive is a crowdsourcing platform for corporate research and development where difficult scientific problems are posted for crowds of solvers to answer in exchange for cash prizes ranging from $10,000 to $100,000 per challenge.[16] InnoCentive, of Waltham, Massachusetts, and London, England, provides access to millions of scientific and technical experts from around the world, and claims a success rate of 50% in providing solutions to previously unsolved scientific and technical problems. The X Prize Foundation creates and runs incentive competitions offering between $1 million and $30 million for solving challenges. Local Motors is another example: a community of 20,000 automotive engineers, designers, and enthusiasts who compete to build off-road rally trucks.[137]

Implicit crowdsourcing

Implicit crowdsourcing is less obvious because users do not necessarily know they are contributing, yet can still be very effective in completing certain tasks.[citation needed] Rather than users actively participating in solving a problem or providing information, implicit crowdsourcing involves users doing another task entirely where a third party gains information for another topic based on the user's actions.[16]

A good example of implicit crowdsourcing is the ESP game, in which users find words to describe Google images, which are then used as metadata for the images. Another popular use of implicit crowdsourcing is reCAPTCHA, which asks people to solve CAPTCHAs to prove they are human and then presents words from old books that computers cannot decipher, in order to digitize them for the web. Like many tasks solved using Mechanical Turk, CAPTCHAs are simple for humans but often very difficult for computers.[117]
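A minimal sketch of the ESP game's agreement mechanic (simplified; the real game adds timers, scoring, and "taboo" words): two players independently tag the same image, and only the labels both produce are trusted as metadata.

```python
def esp_round(labels_a, labels_b):
    """Return the labels both players typed for the same image;
    agreement between independent strangers is the quality signal."""
    return set(labels_a) & set(labels_b)

metadata = esp_round(
    ["dog", "grass", "frisbee", "park"],
    ["dog", "frisbee", "outside"],
)
print(metadata)  # {'dog', 'frisbee'}
```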

Piggyback crowdsourcing is seen most frequently on websites such as Google that data-mine users' search histories and browsing behavior to discover keywords for ads, spelling corrections, and synonyms. In this way, users unintentionally help to modify existing systems such as Google Ads.[43]

Other types

Demographics of the crowd

The crowd is an umbrella term for the people who contribute to crowdsourcing efforts. Though it is sometimes difficult to gather data about the demographics of the crowd, a study by Ross et al. surveyed the demographics of a sample of the more than 400,000 registered crowdworkers using Amazon Mechanical Turk to complete tasks for pay. A 2008 study by Ipeirotis found that users at that time were primarily American, young, female, and well-educated, with 40% earning more than $40,000 per year. In November 2009, Ross found a very different Mechanical Turk population, 36% of which was Indian. Two-thirds of the Indian workers were male, and 66% had at least a bachelor's degree. Two-thirds had annual incomes of less than $10,000, and 27% sometimes or always depended on income from Mechanical Turk to make ends meet.[155]

The average US user of Mechanical Turk earned $2.30 per hour for tasks in 2009, versus $1.58 for the average Indian worker.[citation needed] While the majority of users worked less than five hours per week, 18% worked 15 hours per week or more. These wages are less than the minimum wage in the United States (but not in India), which Ross suggests raises ethical questions for researchers who use crowdsourcing.

The demographics of Microworkers.com differ from those of Mechanical Turk: the US and India together account for only 25% of workers, 197 countries are represented among users, and Indonesia (18%) and Bangladesh (17%) contribute the largest shares. However, 28% of employers are from the US.[156]

Another study of the demographics of the crowd at iStockphoto found a crowd that was largely white, middle- to upper-class, and highly educated, worked in so-called "white-collar" jobs, and had high-speed Internet connections at home.[157] In a 30-day crowdsourcing diary study in Europe, the participants were predominantly highly educated women.[115]

Studies have also found that crowds are not simply collections of amateurs or hobbyists. Rather, crowds are often professionally trained in a discipline relevant to a given crowdsourcing task and sometimes hold advanced degrees and many years of experience in the profession.[157][158][159][160] Claiming that crowds are amateurs, rather than professionals, is factually untrue and may lead to the marginalization of crowd labor rights.[161]

Gregory Saxton et al. studied the role of community users, among other elements, in a content analysis of 103 crowdsourcing organizations. They developed a taxonomy of nine crowdsourcing models (intermediary model, citizen media production, collaborative software development, digital goods sales, product design, peer-to-peer social financing, consumer report model, knowledge base building model, and collaborative science project model) with which to categorize the roles of community users, such as researcher, engineer, programmer, journalist, and graphic designer, and the products and services developed.[162]

Motivations

Further information: Online participation § Motivations

Contributors

Many researchers suggest that both intrinsic and extrinsic motivations cause people to contribute to crowdsourced tasks and these factors influence different types of contributors.[85][157][158][160][163][164][165][166] For example, people employed in a full-time position rate human capital advancement as less important than part-time workers do, while women rate social contact as more important than men do.[164]

Intrinsic motivations are broken down into two categories: enjoyment-based and community-based motivations. Enjoyment-based motivations refer to motivations related to the fun and enjoyment contributors experience through their participation. These motivations include: skill variety, task identity, task autonomy, direct feedback from the job, and taking the job as a pastime.[citation needed] Community-based motivations refer to motivations related to community participation, and include community identification and social contact. In crowdsourced journalism, the motivation factors are intrinsic: the crowd is driven by a possibility to make social impact, contribute to social change, and help their peers.[163]

Extrinsic motivations are broken down into three categories: immediate payoffs, delayed payoffs, and social motivations. Immediate payoffs, through monetary payment, are the immediately received compensation given to those who complete tasks. Delayed payoffs are benefits that can be used to generate future advantages, such as training skills and being noticed by potential employers. Social motivations are the rewards of behaving pro-socially,[167] such as the altruistic motivations of online volunteers. Chandler and Kapelner found that US users of Amazon Mechanical Turk were more likely to complete a task when told they were going to help researchers identify tumor cells than when they were not told the purpose of their task. However, among those who completed the task, the quality of output did not depend on the framing.[168]

Motivation factors in crowdsourcing are often a mix of intrinsic and extrinsic factors.[169] In a crowdsourced law-making project, the crowd was motivated by both: intrinsic motivations included fulfilling civic duty, affecting the law for sociotropic reasons, and deliberating with and learning from peers, while extrinsic motivations included changing the law for financial gain or other benefits. Participation in crowdsourced policy-making was an act of grassroots advocacy, whether to pursue one's own interest or more altruistic goals, such as protecting nature.[170]

Another form of social motivation is prestige or status. The International Children's Digital Library recruited volunteers to translate and review books. Because all translators receive public acknowledgment for their contributions, Kaufman and Schulz cite this as a reputation-based strategy to motivate individuals who want to be associated with institutions that have prestige. The Mechanical Turk uses reputation as a motivator in a different sense, as a form of quality control. Crowdworkers who frequently complete tasks in ways judged to be inadequate can be denied access to future tasks, providing motivation to produce high-quality work.[171]

Despite the potential global reach of IT applications online, recent research illustrates that differences in location[which?] affect participation outcomes in IT-mediated crowds.[172]

Limitations and controversies

At least six major topics cover the limitations and controversies about crowdsourcing:

  1. Impact of crowdsourcing on product quality
  2. Entrepreneurs contribute less capital themselves
  3. Increased number of funded ideas
  4. The value and impact of the work received from the crowd
  5. The ethical implications of low wages paid to workers
  6. Trustworthiness and informed decision making

Impact of crowdsourcing on product quality

Crowdsourcing allows anyone to participate, admitting many unqualified participants and generating large quantities of unusable contributions. Companies, or additional crowdworkers, then have to sort through these low-quality contributions. The task of sorting through contributions, along with the necessary job of managing the crowd, requires companies to hire actual employees, thereby increasing management overhead.[173] Results are also susceptible to targeted, malicious work efforts. Since crowdworkers completing microtasks are paid per task, a financial incentive often causes workers to complete tasks quickly rather than well.[citation needed] Verifying responses is time-consuming, so employers often depend on having multiple workers complete the same task to correct errors. However, having each task completed multiple times increases time and monetary costs.[174]
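The redundancy just described is usually aggregated by a plurality vote over the repeated answers; a minimal sketch (task IDs and labels are invented for illustration):

```python
from collections import Counter

def aggregate(answers_by_task):
    """Keep the most common answer among the workers assigned to each task."""
    return {
        task: Counter(answers).most_common(1)[0][0]
        for task, answers in answers_by_task.items()
    }

responses = {
    "img_001": ["cat", "cat", "dog"],  # one careless worker is outvoted
    "img_002": ["dog", "dog", "dog"],
}
print(aggregate(responses))  # {'img_001': 'cat', 'img_002': 'dog'}
```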

Crowdsourcing quality is also impacted by task design. Lukyanenko et al.[175] argue that the prevailing practice of modeling crowdsourcing data-collection tasks in terms of fixed classes (options) unnecessarily restricts quality. Their results demonstrate that information accuracy depends on the classes used to model domains: participants provided more accurate information when classifying phenomena at a more general level, although such general classifications are typically less useful to sponsoring organizations and hence less common. Further, greater overall accuracy is expected when participants can provide free-form data, compared with tasks in which they select from constrained choices.

Just as limiting, the crowd often lacks the skills or expertise needed to successfully accomplish the desired task. While this scenario does not affect "simple" tasks such as image labeling, it is particularly problematic for more complex tasks, such as engineering design or product validation. A comparison between expert evaluations of business models and those of an anonymous online crowd showed that the anonymous crowd could not evaluate business models to the same level as experts.[176] In these cases, it may be difficult or even impossible to find qualified people in the crowd, as their responses represent only a small fraction of the workers compared with consistent, but incorrect, crowd members.[177] However, if the task is "intermediate" in difficulty, estimating crowdworkers' skills and intentions and leveraging them to infer true responses works well,[178] albeit with an additional computation cost.[citation needed]
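Techniques in this spirit (the cited study's exact method may differ) often alternate between inferring each task's answer from accuracy-weighted votes and re-estimating each worker's accuracy from agreement with those inferred answers, in the style of the classic Dawid-Skene estimator. A heavily simplified sketch for binary labels:

```python
def weighted_inference(votes, iterations=10):
    """votes: {task: {worker: 0-or-1 answer}}. Alternate between
    (1) inferring each task's answer by accuracy-weighted vote and
    (2) re-scoring each worker by agreement with those answers."""
    workers = {w for answers in votes.values() for w in answers}
    skill = {w: 0.7 for w in workers}  # optimistic prior accuracy
    truth = {}
    for _ in range(iterations):
        for task, answers in votes.items():
            weight_one = sum(skill[w] for w, a in answers.items() if a == 1)
            weight_zero = sum(skill[w] for w, a in answers.items() if a == 0)
            truth[task] = 1 if weight_one > weight_zero else 0
        for w in workers:
            graded = [a == truth[t] for t, answers in votes.items()
                      for ww, a in answers.items() if ww == w]
            skill[w] = sum(graded) / len(graded) if graded else 0.5
    return truth, skill

votes = {
    "t1": {"w1": 1, "w2": 1, "w3": 0},
    "t2": {"w1": 0, "w2": 0, "w3": 1},
    "t3": {"w1": 1, "w2": 1, "w3": 1},
}
truth, skill = weighted_inference(votes)
print(truth)  # w3 disagrees twice, so w3's votes end up down-weighted
print(skill)
```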

Crowdworkers are a nonrandom sample of the population. Many researchers use crowdsourcing to quickly and cheaply conduct studies with larger sample sizes than would otherwise be achievable. However, due to limited access to the Internet, participation in less-developed countries is relatively low. Participation in highly developed countries is similarly low, largely because the low pay is not a strong motivation for most users in those countries. These factors bias the population pool towards users in moderately developed countries, as ranked by the Human Development Index.[179]

The likelihood that a crowdsourced project will fail due to lack of monetary motivation or too few participants increases over the course of the project. Tasks that are not completed quickly may be forgotten, buried by filters and search procedures. This results in a long-tail power law distribution of completion times.[180] Additionally, low-paying research studies online have higher rates of attrition, with participants not completing the study once started.[44] Even when tasks are completed, crowdsourcing does not always produce quality results. When Facebook began its localization program in 2008, it encountered some criticism for the low quality of its crowdsourced translations.[181]
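In other words, if $T$ is a task's completion time, the cited distribution has a heavy tail of the generic form

$$P(T > t) \propto t^{-\alpha}, \qquad \alpha > 0,$$

where the exponent $\alpha$ is platform-dependent and not given in the source.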

One of the problems of crowdsourcing products is the lack of interaction between the crowd and the client. Usually little information is known about the final product, and workers rarely interact with the final client during the process. This can decrease the quality of the product, because client interaction is considered a vital part of the design process.[182]

An additional cause of the decrease in product quality that can result from crowdsourcing is the lack of collaboration tools. In a typical workplace, coworkers are organized in such a way that they can work together and build upon each other's knowledge and ideas. Furthermore, the company often provides employees with the necessary information, procedures, and tools to fulfill their responsibilities. However, in crowdsourcing, crowd-workers are left to depend on their own knowledge and means to complete tasks.[173]

A crowdsourced project is usually expected to be unbiased because it incorporates a large population of participants with diverse backgrounds. However, much crowdsourcing work is done by people who are paid or who directly benefit from the outcome (e.g., much of the open-source work on Linux). In many other cases, the end product is the outcome of a single person's endeavor, with the crowd participating only in minor details.[183]

Entrepreneurs contribute less capital themselves

To turn an idea into reality, the first component needed is capital. Depending on the scope and complexity of the crowdsourced project, the amount of capital necessary can range from a few thousand dollars to hundreds of thousands, if not more. The capital-raising process can take from days to months, depending on variables including the entrepreneur's network and the amount of initial self-generated capital.[citation needed]

The crowdsourcing process allows entrepreneurs to access a wide range of investors who can take different stakes in the project.[184] In effect, crowdsourcing simplifies the capital-raising process and allows entrepreneurs to spend more time on the project itself and on reaching milestones rather than on getting it started. Overall, the simplified access to capital can save time starting projects and potentially increase their efficiency.[citation needed]

Others argue that easier access to capital through a large number of smaller investors can hurt the project and its creators. With a simplified capital-raising process involving more investors with smaller stakes, investors are more risk-seeking because they can take on an investment size with which they are comfortable.[184] Entrepreneurs thereby lose the experience of convincing investors who are wary of potential risks, because they no longer depend on a single investor for the survival of their project; instead of being forced to assess risks and convince large institutional investors of a project's promise, they can simply replace wary investors with others willing to take on the risk.

Some translation companies and translation tool consumers purport to use crowdsourcing as a means of drastically cutting costs, instead of hiring professional translators. This practice has been systematically denounced by IAPTI and other translator organizations.[185]

Increased number of funded ideas

The raw number of ideas that get funded, and the quality of those ideas, is a major controversy over the issue of crowdsourcing.

Proponents argue that crowdsourcing is beneficial because it allows the formation of startups with niche ideas that would not survive venture capitalist or angel funding, which are oftentimes the primary sources of startup investment. Many ideas are scrapped in their infancy due to insufficient support and lack of capital, but crowdsourcing allows these ideas to be started if an entrepreneur can find a community to take interest in the project.[186]

Crowdsourcing allows those who would benefit from a project to fund and become a part of it, which is one way for small niche ideas to get started.[187] However, as the number of projects grows, the number of failures also increases. Crowdsourcing assists the development of niche and high-risk projects due to a perceived need from a select few who seek the product. With high risk and small target markets, the pool of crowdsourced projects faces a greater possible loss of capital, lower returns, and lower levels of success.[188]

Concerns

Because crowdworkers are considered independent contractors rather than employees, they are not guaranteed minimum wage. In practice, workers using the Amazon Mechanical Turk generally earn less than the minimum wage. In 2009, it was reported that United States Turk users earned an average of $2.30 per hour for tasks, while users in India earned an average of $1.58 per hour, which is below minimum wage in the United States (but not in India).[155][189] In 2018, a survey of 2,676 Amazon Mechanical Turk workers doing 3.8 million tasks found that the median hourly wage was approximately $2 per hour, and only 4% of workers earned more than the federal minimum wage of $7.25 per hour.[190] Some researchers who have considered using Mechanical Turk to get participants for research studies have argued that the wage conditions might be unethical.[44][191] However, according to other research, workers on Amazon Mechanical Turk do not feel they are exploited and are ready to participate in crowdsourcing activities in the future.[192] When Facebook began its localization program in 2008, it received criticism for using free labor in crowdsourcing the translation of site guidelines.[181]

Typically, no written contracts, nondisclosure agreements, or employee agreements are made with crowdworkers. For users of the Amazon Mechanical Turk, this means that employers decide whether users' work is acceptable and reserve the right to withhold pay if it does not meet their standards.[193] Critics say that crowdsourcing arrangements exploit individuals in the crowd, and a call has been made for crowds to organize for their labor rights.[194][161][195]

Collaboration between crowd members can also be difficult or even discouraged, especially in the context of competitive crowdsourcing. The crowdsourcing site InnoCentive allows organizations to solicit solutions to scientific and technological problems; only 10.6% of respondents there reported working in a team on their submission.[158] Amazon Mechanical Turk workers collaborated with academics to create WeAreDynamo.org, a platform that let them organize and run campaigns to better their work situation, though the site is no longer running.[196] Another platform run by Amazon Mechanical Turk workers and academics, Turkopticon, continues to operate and provides worker reviews of Amazon Mechanical Turk employers.[197]

America Online settled the case Hallissey et al v. America Online, Inc. for $15 million in 2009, after unpaid moderators sued to be paid the minimum wage as employees under the U.S. Fair Labor Standards Act.

See also

References

  1. ^ Schenk, Eric; Guittard, Claude (1 January 2009). "Crowdsourcing: What can be Outsourced to the Crowd and Why". Center for Direct Scientific Communication. Retrieved 1 October 2018 – via HAL.
  2. ^ Hirth, Matthias; Hoßfeld, Tobias; Tran-Gia, Phuoc (2011). "Anatomy of a Crowdsourcing Platform - Using the Example of Microworkers.com" (PDF). 2011 Fifth International Conference on Innovative Mobile and Internet Services in Ubiquitous Computing. pp. 322–329. doi:10.1109/IMIS.2011.89. ISBN 978-1-61284-733-7. S2CID 12955095. Archived from the original (PDF) on 22 November 2015. Retrieved 5 September 2015.
  3. ^ a b Estellés-Arolas, Enrique; González-Ladrón-de-Guevara, Fernando (2012), "Towards an Integrated Crowdsourcing Definition" (PDF), Journal of Information Science, 38 (2): 189–200, doi:10.1177/0165551512437638, hdl:10251/56904, S2CID 18535678
  4. ^ Brabham, D. C. (2013). Crowdsourcing. Cambridge, Massachusetts; London, England: The MIT Press.
  5. ^ Brabham, D. C. (2008). "Crowdsourcing as a Model for Problem Solving an Introduction and Cases". Convergence: The International Journal of Research into New Media Technologies. 14 (1): 75–90. CiteSeerX 10.1.1.175.1623. doi:10.1177/1354856507084420. S2CID 145310730.
  6. ^ Prpić, J., & Shukla, P. (2016). Crowd Science: Measurements, Models, and Methods. In Proceedings of the 49th Annual Hawaii International Conference on System Sciences, Kauai, Hawaii: IEEE Computer Society
  7. ^ Buettner, Ricardo (2015). A Systematic Literature Review of Crowdsourcing Research from a Human Resource Management Perspective. 48th Annual Hawaii International Conference on System Sciences. Kauai, Hawaii: IEEE. pp. 4609–4618. doi:10.13140/2.1.2061.1845. ISBN 978-1-4799-7367-5.
  8. ^ a b Prpić, John; Taeihagh, Araz; Melton, James (September 2015). "The Fundamentals of Policy Crowdsourcing". Policy & Internet. 7 (3): 340–361. arXiv:1802.04143. doi:10.1002/poi3.102. S2CID 3626608.
  9. ^ Afuah, A.; Tucci, C. L. (2012). "Crowdsourcing as a Solution to Distant Search". Academy of Management Review. 37 (3): 355–375. doi:10.5465/amr.2010.0146.
  10. ^ de Vreede, T., Nguyen, C., de Vreede, G. J., Boughzala, I., Oh, O., & Reiter-Palmon, R. (2013). A Theoretical Model of User Engagement in Crowdsourcing. In Collaboration and Technology (pp. 94-109). Springer Berlin Heidelberg
  11. ^ a b Liu, Wei; Moultrie, James; Ye, Songhe (4 May 2019). "The Customer-Dominated Innovation Process: Involving Customers as Designers and Decision-Makers in Developing New Product". The Design Journal. 22 (3): 299–324. doi:10.1080/14606925.2019.1592324. ISSN 1460-6925. S2CID 145931864.
  12. ^ Schlagwein, Daniel; Bjørn-Andersen, Niels (2014), "Organizational Learning with Crowdsourcing: The Revelatory Case of LEGO" (PDF), Journal of the Association for Information Systems, 15 (11): 754–778, doi:10.17705/1jais.00380
  13. ^ Taeihagh, Araz (19 June 2017). "Crowdsourcing, Sharing Economies, and Development". Journal of Developing Societies. 33 (2): 0169796X1771007. arXiv:1707.06603. doi:10.1177/0169796x17710072. S2CID 32008949.
  14. ^ a b c Howe, Jeff (2006). "The Rise of Crowdsourcing". Wired.
  15. ^ Howe, Jeff (2 June 2006). "Crowdsourcing: A Definition". Crowdsourcing Blog. Retrieved 2 January 2013.
  16. ^ a b c d e Brabham, Daren (2008), "Crowdsourcing as a Model for Problem Solving: An Introduction and Cases" (PDF), Convergence: The International Journal of Research into New Media Technologies, 14 (1): 75–90, CiteSeerX 10.1.1.175.1623, doi:10.1177/1354856507084420, S2CID 145310730, archived from the original (PDF) on 2 August 2012
  17. ^ a b Guth, Kristen L.; Brabham, Daren C. (4 August 2017). "Finding the diamond in the rough: Exploring communication and platform in crowdsourcing performance". Communication Monographs. 84 (4): 510–533. doi:10.1080/03637751.2017.1359748. ISSN 0363-7751. S2CID 54045924.
  18. ^ a b c d e f g h "A Brief History of Crowdsourcing [Infographic]". Crowdsourcing.org. 18 March 2012. Archived from the original on 3 July 2015. Retrieved 2 July 2015.
  19. ^ Hern, Chester G.(2002). Tracks in the Sea, p. 123 & 246. McGraw Hill. ISBN 0-07-136826-4.
  20. ^ "Smithsonian Crowdsourcing Since 1849". Smithsonian Institution Archives. 14 April 2011. Retrieved 24 August 2018.
  21. ^ Clark, Catherine E. (25 April 1970). "'C'était Paris en 1970'". Études Photographiques (31). Retrieved 2 July 2015.
  22. ^ Axelrod R. (1980), "'Effective choice in the Prisoner's Dilemma'", Journal of Conflict Resolution, 24 (1): 3−25, doi:10.1177/002200278002400101, S2CID 143112198
  23. ^ "UNV Online Volunteering Service | History". Onlinevolunteering.org. Archived from the original on 2 July 2015. Retrieved 2 July 2015.
  24. ^ "Wired 14.06: The Rise of Crowdsourcing". Archive.wired.com. 4 January 2009. Retrieved 2 July 2015.
  25. ^ Lih, Andrew (2009). The Wikipedia revolution: how a bunch of nobodies created the world's greatest encyclopedia (1st ed.). New York: Hyperion. ISBN 978-1401303716.
  26. ^ Lakhani KR, Garvin DA, Lonstein E (January 2010). "TopCoder (A): Developing Software through Crowdsourcing". Harvard Business School Case: 610–032.
  27. ^ Phadnisi, Shilpa (21 October 2016). "Appirio's TopCoder too is a big catch for Wipro". The Times of India. Retrieved 30 April 2018.
  28. ^ a b "For The Love Of Open Mapping Data". TechCrunch. 9 August 2014. Retrieved 23 July 2019.
  29. ^ a b "Crowdsourcing Back-Up Timeline Early Stories". Archived from the original on 29 November 2014.[better source needed]
  30. ^ Garrigos-Simon, Fernando J.; Gil-Pechuán, Ignacio; Estelles-Miguel, Sofia (2015). Advances in Crowdsourcing. Springer. ISBN 9783319183411.
  31. ^ "Antoine-Jean-Baptiste-Robert Auget, Baron de Montyon". New Advent. Retrieved 25 February 2012.
  32. ^ "It Was All About Alkali". Chemistry Chronicles. Retrieved 25 February 2012.
  33. ^ "Nicolas Appert". John Blamire. Retrieved 25 February 2012.
  34. ^ "9 Examples of Crowdsourcing, Before 'Crowdsourcing' Existed". MemeBurn. 15 September 2011. Retrieved 25 February 2012.
  35. ^ Pande, Shamni (25 May 2013). "The People Know Best". Business Today. India: Living Media India Limited.
  36. ^ Noveck, Beth Simone (2009), Wiki Government: How Technology Can Make Government Better, Democracy Stronger, and Citizens More Powerful, Brookings Institution Press
  37. ^ Sarasua, Cristina; Simperl, Elena; Noy, Natalya F. (2012), "Crowdsourcing Ontology Alignment with Microtasks" (PDF), Institute AIFB. Karlsruhe Institute of Technology: 2
  38. ^ Hollow, Matthew (20 April 2013). "Crowdfunding and Civic Society in Europe: A Profitable Partnership?". Open Citizenship. Retrieved 29 April 2013.
  39. ^ Federal Transit Administration Public Transportation Participation Pilot Program, U.S. Department of Transportation, archived from the original on 7 January 2009
  40. ^ Peer-to-Patent Community Patent Review Project, Peer-to-Patent Community Patent Review Project
  41. ^ Callison-Burch, C.; Dredze, M. (2010), "Creating Speech and Language Data With Amazon's Mechanical Turk" (PDF), Human Language Technologies Conference: 1–12, archived from the original (PDF) on 2 August 2012, retrieved 28 February 2012
  42. ^ McGraw, I.; Seneff, S. (2011), "Growing a Spoken Language Interface on Amazon Mechanical Turk" (PDF), Interspeech: 3057–3060, doi:10.21437/Interspeech.2011-765
  43. ^ a b Kittur, A.; Chi, E.H.; Sun, B. (2008), "Crowdsourcing user studies with Mechanical Turk" (PDF), Chi 2008
  44. ^ a b c Mason, W.; Suri, S. (2010), "Conducting Behavioral Research on Amazon's Mechanical Turk", Behavior Research Methods, SSRN 1691163
  45. ^ Koblin, A. (2008), "The sheep market", Creativity and Cognition: 451, doi:10.1145/1640233.1640348, ISBN 9781605588650, S2CID 20609292
  46. ^ "explodingdog 2015". Explodingdog.com. Retrieved 2 July 2015.
  47. ^ DeVun, Leah (19 November 2009). "Looking at how crowds produce and present art". Wired News. Archived from the original on 24 October 2012. Retrieved 26 February 2012.
  48. ^ Linver, D. (2010), Crowdsourcing and the Evolving Relationship between Art and Artist, archived from the original on 14 July 2014, retrieved 28 February 2012
  49. ^ "Why". INRIX.com. 13 September 2014. Archived from the original on 12 October 2014. Retrieved 2 July 2015.
  50. ^ Vergano, Dan (30 August 2014). "1833 Meteor Storm Started Citizen Science". National Geographic. StarStruck. Retrieved 18 September 2014.
  51. ^ "Gateway to Astronaut Photography of Earth". NASA.
  52. ^ McLaughlin, Elliot. "Image Overload: Help us sort it all out, NASA requests". Cnn.com. CNN. Retrieved 18 September 2014.
  53. ^ Després, Jacques; Hadjsaid, Nouredine; Criqui, Patrick; Noirot, Isabelle (1 February 2015). "Modelling the impacts of variable renewable sources on the power sector: reconsidering the typology of energy modelling tools". Energy. 80: 486–495. doi:10.1016/j.energy.2014.12.005. ISSN 0360-5442.
  54. ^ "OpenEI — Energy Information, Data, and other Resources". OpenEI. Retrieved 26 September 2016.
  55. ^ Garvin, Peggy (12 December 2009). "New Gateway: Open Energy Info". SLA Government Information Division. Dayton, OH, USA. Retrieved 26 September 2016.
  56. ^ Brodt-Giles, Debbie (2012). WREF 2012: OpenEI — an open energy data and information exchange for international audiences (PDF). Golden, CO, USA: National Renewable Energy Laboratory (NREL). Archived from the original (PDF) on 9 October 2016. Retrieved 24 September 2016.
  57. ^ Davis, Chris; Chmieliauskas, Alfredas; Dijkema, Gerard; Nikolic, Igor. "Enipedia". Delft, The Netherlands: Energy and Industry group, Faculty of Technology, Policy and Management, TU Delft. Archived from the original on 10 June 2014. Retrieved 7 October 2016.
  58. ^ Davis, Chris (2012). Making sense of open data: from raw data to actionable insight (PhD thesis). Delft, The Netherlands: Delft University of Technology. Retrieved 2 October 2018. Chapter 9 discusses the initial development of Enipedia in depth.
  59. ^ "What Is the Four-Generation Program?". The Church of Jesus Christ of Latter-day Saints. Retrieved 30 January 2012.
  60. ^ King, Turi E.; Jobling, Mark A. (2009). "What's in a name? Y chromosomes, surnames and the genetic genealogy revolution". Trends in Genetics. 25 (8): 351–60. doi:10.1016/j.tig.2009.06.003. hdl:2381/8106. PMID 19665817. The International Society of Genetic Genealogy (www.isogg.org) advocates the use of genetics as a tool for genealogical research, and provides a support network for genetic genealogists. It hosts the ISOGG Y-haplogroup tree, which has the virtue of being regularly updated.
  61. ^ Mendez, Fernando; et al. (28 February 2013). "An African American Paternal Lineage Adds an Extremely Ancient Root to the Human Y Chromosome Phylogenetic Tree". The American Journal of Human Genetics. 92 (3): 454–459. doi:10.1016/j.ajhg.2013.02.002. PMC 3591855. PMID 23453668.
  62. ^ Wells, Spencer (2013). "The Genographic Project and the Rise of Citizen Science". Southern California Genealogical Society (SCGS). Archived from the original on 10 July 2013. Retrieved 10 July 2013.
  63. ^ "History of the Christmas Bird Count | Audubon". Birds.audubon.org. 22 January 2015. Retrieved 2 July 2015.
  64. ^ "Thank you!". Audubon. 5 October 2017. Archived from the original on 24 August 2014.
  65. ^ "Home - ISCRAM2015 - University of Agder" (PDF). iscram2015.uia.no. Archived from the original (PDF) on 17 October 2016. Retrieved 14 October 2016.
  66. ^ Aitamurto, Tanja (2015). "Motivation Factors in Crowdsourced Journalism: Social Impact, Social Change and Peer-Learning". International Journal of Communication. 9: 3523–3543.
  67. ^ Aitamurto, Tanja (2016). "Crowdsourcing as a Knowledge-Search Method in Digital Journalism: Ruptured Ideals and Blended Responsibility". Digital Journalism. 4 (2): 280–297. doi:10.1080/21670811.2015.1034807. S2CID 156243124.
  68. ^ Aitamurto, Tanja (2013). "Balancing between open and closed: co-creation in magazine journalism". Digital Journalism. 1 (2): 229–251. doi:10.1080/21670811.2012.750150. S2CID 62882093.
  69. ^ "Algorithm Watch". Algorithm Watch. 2022. Retrieved 18 May 2022.
  70. ^ "Overview in English". DataSkop. 2022. Retrieved 18 May 2022.
  71. ^ "Mozilla Rally". Mozilla Rally. Retrieved 18 May 2022.
  72. ^ Angus, Daniel (16 February 2022). "A data economy: the case for doing and knowing more about algorithms". Crikey. Retrieved 24 March 2022.
  73. ^ Burgess, Jean; Angus, Daniel; Carah, Nicholas; Andrejevic, Mark; Hawker, Kiah; Lewis, Kelly; Obeid, Abdul; Smith, Adam; Tan, Jane; Fordyce, Robbie; Trott, Verity (8 November 2021). "Critical simulation as hybrid digital method for exploring the data operations and vernacular cultures of visual social media platforms". SocArXiv. doi:10.31235/osf.io/2cwsu. S2CID 243837581.
  74. ^ The Markup (2022). "The Citizen Browser Project—Auditing the Algorithms of Disinformation". The Markup. Retrieved 18 May 2022.
  75. ^ Smith, Graham; Richards, Robert C.; Gastil, John (12 May 2015). "The Potential of Participedia as a Crowdsourcing Tool for Comparative Analysis of Democratic Innovations" (PDF). Policy & Internet. 7 (2): 243–262. doi:10.1002/poi3.93. ISSN 1944-2866.
  76. ^ Moon, M. Jae (2018). "Evolution of co-production in the information age: crowdsourcing as a model of web-based co-production in Korea". Policy and Society. 37 (3): 294–309. doi:10.1080/14494035.2017.1376475. ISSN 1449-4035. S2CID 158440300.
  77. ^ Taeihagh, Araz (8 November 2017). "Crowdsourcing: a new tool for policy-making?". Policy Sciences. 50 (4): 629–647. arXiv:1802.03113. doi:10.1007/s11077-017-9303-3. ISSN 0032-2687. S2CID 27696037.
  78. ^ Diamond, Larry; Whittington, Zak (2009). "Social Media". In Welzel, Christian; Haerpfer, Christian W.; Bernhagen, Patrick; Inglehart, Ronald F. (eds.). Democratization (2 ed.). Oxford: Oxford University Press (published 2018). p. 256. ISBN 9780198732280. Retrieved 4 March 2021. Another way that social media can contribute to democratization is by 'crowdsourcing' information. This elicits the knowledge and wisdom of the 'crowd' [...].
  79. ^ Aitamurto, Tanja (2012). Crowdsourcing for Democracy: New Era In Policy–Making. Committee for the Future, Parliament of Finland. pp. 10–30. ISBN 978-951-53-3459-6.
  80. ^ Prpić, J.; Taeihagh, A.; Melton, J. (2014). "Crowdsourcing the Policy Cycle. Collective Intelligence 2014, MIT Center for Collective Intelligence" (PDF). Humancomputation.com. Archived from the original (PDF) on 24 June 2015. Retrieved 2 July 2015.
  81. ^ Prpić, J.; Taeihagh, A.; Melton, J. (2014). "A Framework for Policy Crowdsourcing. Oxford Internet Institute, University of Oxford - IPP 2014 - Crowdsourcing for Politics and Policy" (PDF). Ipp.oii.ox.ac.uk. Retrieved 2 October 2018.
  82. ^ Prpić, J.; Taeihagh, A.; Melton, J. (2014). "Experiments on Crowdsourcing Policy Assessment. Oxford Internet Institute, University of Oxford - IPP 2014 - Crowdsourcing for Politics and Policy" (PDF). Ipp.oii.ox.ac.uk. Retrieved 2 July 2015.
  83. ^ Thapa, B.; Niehaves, B.; Seidel, C.; Plattfaut, R. (2015). "Citizen involvement in public sector innovation: Government and citizen perspectives". Information Polity. 20 (1): 3–17. doi:10.3233/IP-150351.
  84. ^ Aitamurto, Tanja; Landemore, Hélène (4 February 2015). "Five design principles for crowdsourced policymaking: Assessing the case of crowdsourced off-road traffic law reform in Finland". Journal of Social Media for Organizations (1): 1–19.
  85. ^ a b Aitamurto; Landemore; Galli (2016). "Unmasking the Crowd: Participants' Motivation Factors, Profile and Expectations for Participation in Crowdsourced Policymaking". Information, Communication & Society. 20 (8): 1239–1260. doi:10.1080/1369118x.2016.1228993. S2CID 151989757.
  86. ^ Aitamurto; Chen; Cherif; Galli; Santana (2016). "Making Sense of Crowdsourced Civic Data with Big Data Tools". ACM Digital Archive: Academic Mindtrek. doi:10.1145/2994310.2994366. S2CID 16855773.
  87. ^ a b c Aitamurto, Tanja (31 January 2015). Crowdsourcing for Democracy: New Era in Policymaking. Committee for the Future, Parliament of Finland. ISBN 978-951-53-3459-6.
  88. ^ "Home". challenge.gov.
  89. ^ Nussbaum, Stan (2003). "Proverbial perspectives on pluralism". Connections: The Journal of the WEA Missions Committee (October): 30–31.
  90. ^ "Oromo dictionary project". OromoDictionary.com. Retrieved 3 February 2014.
  91. ^ Albright, Eric; Hatton, John (2007). Chapter 10. WeSay, a Tool for Engaging Native Speakers in Dictionary Building. hdl:10125/1368. ISBN 978-0-8248-3309-1.
  92. ^ "Developing ASL vocabulary for science and math". Washington.edu. 7 December 2012. Retrieved 3 February 2014.
  93. ^ Keuleers; et al. (February 2015). "Word knowledge in the crowd: Measuring vocabulary size and word prevalence in a massive online experiment". Quarterly Journal of Experimental Psychology. 68 (8): 1665–1692. doi:10.1080/17470218.2015.1022560. PMID 25715025. S2CID 4894686.
  94. ^ a b Bill, Jeremiah; Gong, He; Hamilton, Brooke; Hawthorn, Henry; et al. "The extension of (positive) anymore". Google Docs. Retrieved 27 September 2020.
  95. ^ "Pashto Proverb Collection project". AfghanProverbs.com. Archived from the original on 4 February 2014. Retrieved 3 February 2014.
  96. ^ "Comparing methods of collecting proverbs" (PDF). gial.edu.
  97. ^ Zellem, Edward (2014). Mataluna: 151 Afghan Pashto Proverbs. Tampa, FL: Culture Direct.
  98. ^ Zhai, Haijun; Lingren, Todd; Deleger, Louise; Li, Qi; Kaiser, Megan; Stoutenborough, Laura; Solti, Imre (2013). "Web 2.0-based crowdsourcing for high-quality gold standard development in clinical Natural Language Processing". Journal of Medical Internet Research. 15 (4): e73. doi:10.2196/jmir.2426. PMC 3636329. PMID 23548263.
  99. ^ Martin, Fred; Resnick, Mitchel (1993), "LEGO/Logo and Electronic Bricks: Creating a Scienceland for Children", Advanced Educational Technologies for Mathematics and Science, Berlin, Heidelberg: Springer Berlin Heidelberg, pp. 61–89, doi:10.1007/978-3-662-02938-1_2, ISBN 978-3-642-08152-1, retrieved 26 July 2022
  100. ^ Reinhold, Stephan; Dolnicar, Sara (December 2017), "How Airbnb Creates Value", Peer-to-Peer Accommodation Networks, Goodfellow Publishers, doi:10.23912/9781911396512-3602, ISBN 9781911396512, retrieved 26 July 2022
  101. ^ Parker, Christopher J.; May, Andrew; Mitchell, Val (November 2013). "The role of VGI and PGI in supporting outdoor activities". Applied Ergonomics. 44 (6): 886–894. doi:10.1016/j.apergo.2012.04.013. ISSN 0003-6870. PMID 22795180. S2CID 12918341.
  102. ^ Parker, Christopher J.; May, Andrew; Mitchell, Val (15 May 2014). "User-centred design of neogeography: the impact of volunteered geographic information on users' perceptions of online map 'mashups'". Ergonomics. 57 (7): 987–997. doi:10.1080/00140139.2014.909950. ISSN 0014-0139. PMID 24827070. S2CID 13458260.
  103. ^ Brown, Michael; Sharples, Sarah; Harding, Jenny; Parker, Christopher J. (November 2013). "Usability of Geographic Information: Current challenges and future directions" (PDF). Applied Ergonomics. 44 (6): 855–865. doi:10.1016/j.apergo.2012.10.013. PMID 23177775. S2CID 26412254. Archived from the original (PDF) on 19 July 2018. Retrieved 20 August 2019.
  104. ^ Parker, Christopher J.; May, Andrew; Mitchell, Val (August 2012). "Understanding Design with VGI using an Information Relevance Framework". Transactions in GIS. 16 (4): 545–560. doi:10.1111/j.1467-9671.2012.01302.x. ISSN 1361-1682. S2CID 20100267.
  105. ^ Holley, Rose (March 2010). "Crowdsourcing: How and Why Should Libraries Do It?". D-Lib Magazine. 16 (3/4). doi:10.1045/march2010-holley. Retrieved 21 May 2021.
  106. ^ Trant, Jennifer (2009). Tagging, Folksonomy and Art Museums: Results of steve.museum's research (PDF). Archives & Museum Informatics. Archived from the original (PDF) on 10 February 2010. Retrieved 21 May 2021.
  107. ^ Andro, M. (2018). Digital libraries and crowdsourcing, Wiley / ISTE. ISBN 9781786301611.
  108. ^ Rahman, Mahbubur; Blackwell, Brenna; Banerjee, Nilanjan; Dharmendra, Saraswat (2015), "Smartphone-based hierarchical crowdsourcing for weed identification", Computers and Electronics in Agriculture, 113: 14–23, doi:10.1016/j.compag.2014.12.012, retrieved 12 August 2015
  109. ^ Primarily on the Bridge Winners website
  110. ^ Tang, Weiming; Han, Larry; Best, John; Zhang, Ye; Mollan, Katie; Kim, Julie; Liu, Fengying; Hudgens, Michael; Bayus, Barry (1 June 2016). "Crowdsourcing HIV Test Promotion Videos: A Noninferiority Randomized Controlled Trial in China". Clinical Infectious Diseases. 62 (11): 1436–1442. doi:10.1093/cid/ciw171. ISSN 1537-6591. PMC 4872295. PMID 27129465.
  111. ^ a b Zhang, Ye; Kim, Julie A.; Liu, Fengying; Tso, Lai Sze; Tang, Weiming; Wei, Chongyi; Bayus, Barry L.; Tucker, Joseph D. (November 2015). "Creative Contributory Contests to Spur Innovation in Sexual Health: 2 Cases and a Guide for Implementation". Sexually Transmitted Diseases. 42 (11): 625–628. doi:10.1097/OLQ.0000000000000349. ISSN 1537-4521. PMC 4610177. PMID 26462186.
  112. ^ Créquit, Perrine (2018). "Mapping of Crowdsourcing in Health: Systematic Review". Journal of Medical Internet Research. 20 (5): e187. doi:10.2196/jmir.9330. PMC 5974463. PMID 29764795.
  113. ^ van der Krieke; et al. (2015). "HowNutsAreTheDutch (HoeGekIsNL): A crowdsourcing study of mental symptoms and strengths" (PDF). International Journal of Methods in Psychiatric Research. 25 (2): 123–144. doi:10.1002/mpr.1495. PMC 6877205. PMID 26395198.
  114. ^ Prpić, J. (2015). "Health Care Crowds: Collective Intelligence in Public Health. Collective Intelligence 2015. Center for the Study of Complex Systems, University of Michigan". Papers.ssrn.com. SSRN 2570593.
  115. ^ a b van der Krieke, L; Blaauw, FJ; Emerencia, AC; Schenk, HM; Slaets, JP; Bos, EH; de Jonge, P; Jeronimus, BF (2016). "Temporal Dynamics of Health and Well-Being: A Crowdsourcing Approach to Momentary Assessments and Automated Generation of Personalized Feedback (2016)" (PDF). Psychosomatic Medicine. 79 (2): 213–223. doi:10.1097/PSY.0000000000000378. PMID 27551988. S2CID 10955232.
  116. ^ Ess, Henk van (2010). "Crowdsourcing: How to Find a Crowd". ARD ZDF Akademie, Berlin. p. 99.
  117. ^ a b c Doan, A.; Ramakrishnan, R.; Halevy, A. (2011), "Crowdsourcing Systems on the World Wide Web" (PDF), Communications of the ACM, 54 (4): 86–96, doi:10.1145/1924421.1924442, S2CID 207184672
  118. ^ Brabham, Daren C. (2013), Crowdsourcing, MIT Press, p. 45
  119. ^ Blohm, Ivo; Zogaj, Shkodran; Bretschneider, Ulrich; Leimeister, Jan Marco (2018). "How to Manage Crowdsourcing Platforms Effectively" (PDF). California Management Review. 60 (2): 122–149. doi:10.1177/0008125617738255. S2CID 73551209.
  120. ^ Howe, Jeff (2008), Crowdsourcing: Why the Power of the Crowd is Driving the Future of Business (PDF), The International Achievement Institute, archived from the original (PDF) on 23 September 2015, retrieved 9 April 2012
  121. ^ "Crowdvoting: How Elo Limits Disruption". thevisionlab.com. 25 May 2017.
  122. ^ Robson, John (24 February 2012). "IEM Demonstrates the Political Wisdom of Crowds". Canoe.ca. Archived from the original on 7 April 2012. Retrieved 31 March 2012.
  123. ^ "4 Great Examples of Crowdsourcing through Social Media". digitalagencymarketing.com. 2012. Archived from the original on 1 April 2012. Retrieved 29 March 2012.
  124. ^ Goldberg, Ken; Newsom, Gavin (12 June 2014). "Let's amplify California's collective intelligence". Citris-uc.org. Retrieved 14 June 2014.
  125. ^ Escoffier, N.; McKelvey, B. (2014). "Using 'Crowd-Wisdom Strategy' to Co-Create Market Value: Proof-of-Concept from the Movie Industry". In Wikstrom, P.; DeFillippi, R. (eds.), International Perspectives on Business Innovation and Disruption in the Creative Industries: Film, Video, Photography. UK: Edward Elgar Publishing Ltd, ch. 11.
  126. ^ Block, A. B. (21 April 2010). "How boxoffice trading could flop". The Hollywood Reporter.
  127. ^ Chen, A.; Panaligan, R. (2013). "Quantifying movie magic with Google search". Google White Paper, Industry Perspectives + User Insights.
  128. ^ Williams, Jack (17 February 2017). "An Indoor Football Team Has Its Fans Call the Plays". The New York Times. ISSN 0362-4331. Retrieved 7 February 2018.
  129. ^ Prive, Tanya. "What Is Crowdfunding And How Does It Benefit The Economy". Forbes.com. Retrieved 2 July 2015.
  130. ^ Choy, Katherine; Schlagwein, Daniel (2016), "Crowdsourcing for a better world: On the relation between IT affordances and donor motivations in charitable crowdfunding", Information Technology & People, 29 (1): 221–247, doi:10.1108/ITP-09-2014-0215
  131. ^ Barnett, Chance. "Crowdfunding Sites In 2014". Forbes.com. Retrieved 2 July 2015.
  132. ^ a b c Agrawal, Ajay; Catalini, Christian; Goldfarb, Avi (2014). "Some Simple Economics of Crowdfunding". National Bureau of Economic Research: 63–97.
  133. ^ Leimeister, J.M.; Huber, M.; Bretschneider, U.; Krcmar, H. (2009), "Leveraging Crowdsourcing: Activation-Supporting Components for IT-Based Ideas Competition", Journal of Management Information Systems, 26 (1): 197–224, doi:10.2753/mis0742-1222260108, S2CID 17485373
  134. ^ Ebner, W.; Leimeister, J.; Krcmar, H. (2009), "Community Engineering for Innovations: The Ideas Competition as a method to nurture a Virtual Community for Innovations", R&D Management, 39 (4): 342–356, doi:10.1111/j.1467-9310.2009.00564.x, S2CID 16316321
  135. ^ "DARPA Network Challenge". DARPA Network Challenge. Archived from the original on 11 August 2011. Retrieved 28 November 2011.
  136. ^ "Social media web snares 'criminals'". New Scientist. Retrieved 4 April 2012.
  137. ^ "Beyond XPrize: The 10 Best Crowdsourcing Tools and Technologies". 20 February 2012. Retrieved 30 March 2012.
  138. ^ Cunard, C. (19 July 2010). "The Movie Research Experience gets audiences involved in filmmaking". The Daily Bruin.
  139. ^ MacArthur, Kate. "Squadhelp wants your company to crowdsource better names (and avoid Boaty McBoatface)". chicagotribune.com. Retrieved 28 August 2017.
  140. ^ "Compete To Create Your Dream Home". FastCoexist.com. 4 June 2013. Retrieved 3 February 2014.
  141. ^ "Designers, clients forge ties on web". Boston Herald. 11 June 2012. Retrieved 3 February 2014.
  142. ^ Dolan, Shelagh, "Crowdsourced delivery explained: making same day shipping cheaper through local couriers.", Business Insider, archived from the original on 22 May 2018, retrieved 21 May 2018
  143. ^ Murison, Malek (19 April 2018), "LivingPackets uses IoT, crowdshipping to transform deliveries", Internet of Business, retrieved 19 April 2018
  144. ^ Biller, David; Sciaudone, Christina (19 June 2018), "Goldman Sachs, Soros Bet on the Uber of Brazilian Trucking", Bloomberg, retrieved 11 March 2019
  145. ^ Tyrsina, Radu, "Parcl Uses Trusted Forwarders to Bring you Products that don't Ship to your Country", Technology Personalised, archived from the original on 3 October 2015, retrieved 1 October 2015
  146. ^ Geiger, D.; Rosemann, M.; Fielt, E. (2011). "Crowdsourcing information systems: a systems theory perspective". In Proceedings of the 22nd Australasian Conference on Information Systems (ACIS 2011).
  147. ^ Powell, D. (2015). "A new tool for crowdsourcing". МИР (Modernization. Innovation. Development). 6 (2-2 (22)). ISSN 2079-4665.
  148. ^ Yang, J.; Adamic, L.; Ackerman, M. (2008), "Crowdsourcing and Knowledge Sharing: Strategic User Behavior on Taskcn" (PDF), Proceedings of the 9th ACM Conference on Electronic Commerce, doi:10.1145/1386790.1386829, S2CID 15553154, archived from the original (PDF) on 29 July 2020, retrieved 28 February 2012
  149. ^ "Mobile Crowdsourcing". Clickworker. Retrieved 10 December 2014.
  150. ^ Thebault-Spieker; Terveen; Hecht. Avoiding the South Side and the Suburbs: The Geography of Mobile Crowdsourcing Markets.
  151. ^ Chatzimiloudis; Konstantinidis; Laoudias; Zeinalipour-Yazti. "Crowdsourcing with smartphones" (PDF).
  152. ^ Arkian, Hamid Reza; Diyanat, Abolfazl; Pourkhalili, Atefe (2017). "MIST: Fog-based data analytics scheme with cost-efficient resource provisioning for IoT crowdsensing applications". Journal of Network and Computer Applications. 82: 152–165. doi:10.1016/j.jnca.2017.01.012.
  153. ^ Felstiner, Alek (August 2011). "Working the Crowd: Employment and Labor Law in the Crowdsourcing Industry" (PDF). Berkeley Journal of Employment & Labor Law. 32: 150–151.
  154. ^ "View of Crowdsourcing: Libertarian Panacea or Regulatory Nightmare?". online-shc.com. Retrieved 26 May 2017.[permanent dead link]
  155. ^ a b Ross, J.; Irani, L.; Silberman, M.S.; Zaldivar, A.; Tomlinson, B. (2010). "Who are the Crowdworkers? Shifting Demographics in Mechanical Turk" (PDF). Chi 2010. Archived from the original (PDF) on 1 April 2011. Retrieved 28 February 2012.
  156. ^ Hirth, M.; Hoßfeld, T.; Tran-Gia, P. (2011), Human Cloud as Emerging Internet Application – Anatomy of the Microworkers Crowdsourcing Platform (PDF)
  157. ^ a b c Brabham, Daren C. (2008). "Moving the Crowd at iStockphoto: The Composition of the Crowd and Motivations for Participation in a Crowdsourcing Application". First Monday. 13 (6). doi:10.5210/fm.v13i6.2159. Archived from the original on 24 November 2012. Retrieved 27 June 2012.
  158. ^ a b c Lakhani; et al. (2007). "The Value of Openness in Scientific Problem Solving" (PDF). Retrieved 26 February 2012.
  159. ^ Brabham, Daren C. (2012). "Managing Unexpected Publics Online: The Challenge of Targeting Specific Groups with the Wide-Reaching Tool of the Internet". International Journal of Communication. 6: 20.
  160. ^ a b Brabham, Daren C. (2010). "Moving the Crowd at Threadless: Motivations for Participation in a Crowdsourcing Application". Information, Communication & Society. 13 (8): 1122–1145. doi:10.1080/13691181003624090. S2CID 143402410.
  161. ^ a b Brabham, Daren C. (2012). "The Myth of Amateur Crowds: A Critical Discourse Analysis of Crowdsourcing Coverage". Information, Communication & Society. 15 (3): 394–410. doi:10.1080/1369118X.2011.641991. S2CID 145675154.
  162. ^ Saxton; Oh; Kishore (2013). "Rules of Crowdsourcing: Models, Issues, and Systems of Control". Information Systems Management. 30: 2–20. CiteSeerX 10.1.1.300.8026. doi:10.1080/10580530.2013.739883. S2CID 16811686.
  163. ^ a b Aitamurto, Tanja (2015). "Motivation Factors in Crowdsourced Journalism: Social Impact, Social Change, and Peer Learning". International Journal of Communication. 9: 3523–3543.
  164. ^ a b Kaufmann, N.; Schulze, T.; Viet, D. (2011). "More than fun and money. Worker Motivation in Crowdsourcing – A Study on Mechanical Turk" (PDF). Proceedings of the Seventeenth Americas Conference on Information Systems. Archived from the original (PDF) on 27 February 2012.
  165. ^ Brabham, Daren C. (2012). "Motivations for Participation in a Crowdsourcing Application to Improve Public Engagement in Transit Planning". Journal of Applied Communication Research. 40 (3): 307–328. doi:10.1080/00909882.2012.693940. S2CID 144807388.
  166. ^ Lietsala, Katri; Joutsen, Atte (2007). "Hang-a-rounds and True Believers: A Case Analysis of the Roles and Motivational Factors of the Star Wreck Fans". MindTrek 2007 Conference Proceedings.
  167. ^ "State of the World's Volunteerism Report 2011" (PDF). Unv.org. Archived from the original (PDF) on 2 December 2014. Retrieved 1 July 2015.
  168. ^ Chandler, D.; Kapelner, A. (2010). "Breaking Monotony with Meaning: Motivation in Crowdsourcing Markets" (PDF). Journal of Economic Behavior & Organization. 90: 123–133. arXiv:1210.0962. doi:10.1016/j.jebo.2013.03.003. S2CID 8563262.
  169. ^ Aparicio, M.; Costa, C.; Braga, A. (2012). Proposing a system to support crowdsourcing (PDF). OSDOC '12 Proceedings of the Workshop on Open Source and Design of Communication. pp. 13–17. doi:10.1145/2316936.2316940. ISBN 9781450315258. S2CID 16494503.
  170. ^ Aitamurto; Landemore; Galli (2016). "Unmasking the Crowd: Participants' Motivation Factors, Expectations, and Profile in a Crowdsourced Law Reform". Information, Communication & Society.
  171. ^ Quinn, Alexander J.; Bederson, Benjamin B. (2011). "Human Computation: A Survey and Taxonomy of a Growing Field, CHI 2011 [Computer Human Interaction conference], May 7–12, 2011, Vancouver, BC, Canada" (PDF). Retrieved 30 June 2015.
  172. ^ Prpić, J; Shukla, P.; Roth, Y.; Lemoine, J.F. (2015). "A Geography of Participation in IT-Mediated Crowds". Proceedings of the Hawaii International Conference on Systems Sciences 2015. SSRN 2494537.
  173. ^ a b Borst, Irma. "The Case For and Against Crowdsourcing: Part 2". Archived from the original on 12 September 2015. Retrieved 9 February 2015.
  174. ^ Ipeirotis; Provost; Wang (2010). "Quality Management on Amazon Mechanical Turk" (PDF). Archived from the original (PDF) on 9 August 2012. Retrieved 28 February 2012.
  175. ^ Lukyanenko, Roman; Parsons, Jeffrey; Wiersma, Yolanda (2014). "The IQ of the Crowd: Understanding and Improving Information Quality in Structured User-Generated Content". Information Systems Research. 25 (4): 669–689. doi:10.1287/isre.2014.0537.
  176. ^ Goerzen, Thomas; Kundisch, Dennis (11 August 2016). "Can the Crowd Substitute Experts in Evaluation of Creative Ideas? An Experimental Study Using Business Models". AMCIS 2016 Proceedings.
  177. ^ Burnap, Alex; Ren, Alex J.; Papazoglou, Giannis; Gerth, Richard; Gonzalez, Richard; Papalambros, Panos. "When Crowdsourcing Fails: A Study of Expertise on Crowdsourced Design Evaluation" (PDF). Archived from the original (PDF) on 29 October 2015. Retrieved 19 May 2015.
  178. ^ Kurve, Aditya; Miller, David J.; Kesidis, George (30 May 2014). "Multicategory Crowdsourcing Accounting for Variable Task Difficulty, Worker Skill, and Worker Intention". IEEE Transactions on Knowledge and Data Engineering (99).
  179. ^ Hirth; Hoßfeld; Tran-Gia (2011), Human Cloud as Emerging Internet Application - Anatomy of the Microworkers Crowdsourcing Platform (PDF)
  180. ^ Ipeirotis, Panagiotis G. (2010). "Analyzing the Amazon Mechanical Turk Marketplace" (PDF). XRDS: Crossroads, the ACM Magazine for Students. 17 (2): 16–21. doi:10.1145/1869086.1869094. S2CID 6472586. SSRN 1688194. Retrieved 2 October 2018.
  181. ^ a b Hosaka, Tomoko A. (April 2008). "Facebook asks users to translate for free". NBC News.
  182. ^ Britt, Darice. "Crowdsourcing: The Debate Roars On". Archived from the original on 1 July 2014. Retrieved 4 December 2012.
  183. ^ Woods, Dan (28 September 2009). "The Myth of Crowdsourcing". Forbes. Retrieved 4 December 2012.
  184. ^ a b Aitamurto, Tanja; Leiponen, Aija. "The Promise of Idea Crowdsourcing: Benefits, Contexts, Limitations". Ideasproject.com. Retrieved 2 July 2015.
  185. ^ "International Translators Association Launched in Argentina". Latin American Herald Tribune. Archived from the original on 11 March 2021. Retrieved 23 November 2016.
  186. ^ Kleemann, Frank (2008). "Un(der)paid Innovators: The Commercial Utilization of Consumer Work through Crowdsourcing". Sti-studies.de. Retrieved 2 July 2015.
  187. ^ Jason (2011). "Crowdsourcing: A Million Heads is Better Than One". Crowdsourcing.org. Archived from the original on 3 July 2015. Retrieved 2 July 2015.
  188. ^ Dupree, Steven (2014). "Crowdfunding 101: Pros and Cons". Gsb.stanford.edu. Retrieved 2 July 2015.
  189. ^ "Fair Labor Standards Act Advisor". Retrieved 28 February 2012.
  190. ^ Hara, Kotaro; Adams, Abigail; Milland, Kristy; Savage, Saiph; Callison-Burch, Chris; Bigham, Jeffrey P. (21 April 2018). "A Data-Driven Analysis of Workers' Earnings on Amazon Mechanical Turk". Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. New York, NY, USA: ACM: 1–14. doi:10.1145/3173574.3174023. ISBN 9781450356206. S2CID 5040507.
  191. ^ Norcie, Greg (2011). "Ethical and practical considerations for compensation of crowdsourced research participants". CHI WS on Ethics Logs and VideoTape: Ethics in Large Scale Trials & User Generated Content. [1] Archived 30 June 2012 at the Wayback Machine. Accessed 30 June 2015.
  192. ^ Busarovs, Aleksejs (2013). "Ethical Aspects of Crowdsourcing, or is it a Modern Form of Exploitation" (PDF). International Journal of Economics & Business Administration. 1 (1): 3–14. doi:10.35808/ijeba/1. Retrieved 26 November 2014.
  193. ^ Paolacci, G; Chandler, J; Ipeirotis, P.G. (2010). "Running experiments on Amazon Mechanical Turk". Judgment and Decision Making. 5 (5): 411–419. hdl:1765/31983.
  194. ^ Graham, Mark; Hjorth, Isis; Lehdonvirta, Vili (1 May 2017). "Digital labour and development: impacts of global digital labour platforms and the gig economy on worker livelihoods". Transfer: European Review of Labour and Research. 23 (2): 135–162. doi:10.1177/1024258916687250. ISSN 1024-2589. PMC 5518998. PMID 28781494.
  195. ^ "The Crowdsourcing Scam". The Baffler. No. 26. December 2014.
  196. ^ Salehi; et al. (2015). "We Are Dynamo: Overcoming Stalling and Friction in Collective Action for Crowd Workers" (PDF). Retrieved 16 June 2015.
  197. ^ Irani, Lilly C.; Silberman, M. Six (27 April 2013). "Turkopticon". Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. New York, NY, USA: ACM: 611–620. doi:10.1145/2470654.2470742. ISBN 9781450318990. S2CID 207203679.