Fundamental to the success of what, for lack of a better term, we are calling "robot journalism" is the question of whether audiences can tell the difference between automated content and human content. And does it matter?
Professor Neil Thurman of the Institute of Media Research at the University of Munich said robot journalism was a problematic term but a convenient one for the range of technologies used in news creation and distribution.
He identified three major steps in the creation of content: news detection and verification, content creation, and news distribution.
Automation is available in all three areas, but his research focussed on text automation, which is being used by news organisations such as AP and Reuters for business and sports stories.
For text automation to work, data sets are essential, as are the templates news organisations develop to ensure the resulting stories meet their needs.
Templates are built by working out what is important, the synonyms to use so that the same words are not repeated over and over, and what he called "branches", or data sets.
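A minimal sketch of the template idea described above — a synonym pool to vary the wording plus branches that select different sentence patterns from the same match data. The team names, function and phrasing here are invented for illustration, not taken from any actual newsroom system.

```python
import random

# Hypothetical synonym pool: rotating the verb stops the same
# word appearing in every generated story.
VERBS = ["beat", "defeated", "overcame", "saw off"]

def generate_report(home, away, home_goals, away_goals):
    """Turn one row of match data into a one-sentence report."""
    if home_goals > away_goals:
        # Branch: home win.
        return f"{home} {random.choice(VERBS)} {away} {home_goals}-{away_goals}."
    if home_goals < away_goals:
        # Branch: away win.
        return f"{away} {random.choice(VERBS)} {home} {away_goals}-{home_goals} away from home."
    # Branch: draw.
    return f"{home} and {away} drew {home_goals}-{away_goals}."

print(generate_report("Rovers", "United", 2, 1))
```

Fed a whole results table, a loop over such a template could file dozens of short reports in seconds, which is the appeal for wire services covering routine fixtures.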
The University of Munich research included seeking journalists' opinions of the resulting automated stories, and staffers from CNN, BBC, Trinity Mirror, The Sun newspaper and Thomson Reuters were surveyed.
Overall, the responses were positive, he says. Rather than seeing automation as putting them out of a job, journalists thought it would ease the burden of covering council meetings, junior sports and crime figures, freeing them to add value, complexity, personalisation, creativity and readability.
Thurman's research also sought audience reactions in a range of studies, mostly by asking readers whether they could distinguish computer-generated copy from human copy. He concluded there was little difference in perception, although much depended on what the audience was told in the byline.
If the byline said the story was written by a named person, the audience tended to consider the story better, more reliable and more creative. If the audience was given plenty of detail about how the story was put together from a template, readers were less impressed with it.
Swedish, Dutch and German studies found little difference in audience perception, but Korean readers rated the computer-generated copy much higher - "a cultural difference, as Koreans have very little trust in their media", Thurman said.
So rather than robot journalism, a better description might be "cobot" journalism, with automation working in cooperation with journalists to add creativity and complexity to the output.
Thurman said one problem with automated stories was adding unexpected information. In a football match, for example, the result may not be the story so much as the action off the pitch, and that would never appear in a story generated from a template focussed only on the results.
A crucial question was whether readers would find the output interesting enough, and whether technology could make reporting more objective and less biased.
Meanwhile Melanie Rossmann from the Ludwig-Maximilians-Universität Munich found that the effectiveness of automation depends entirely on the data provided.
The benefits of automation are that it is fast and cheap, producing real-time news and more reports, with greater accuracy, fewer grammar and spelling errors, objectivity and personalisation.
Automation does have limitations, however, including the availability of high-quality data, its interpretation (it can say how but not why), and the quality of the writing, which shapes audience perception.
Perhaps the next step in automation is to write a sports-match story in a sympathetic tone for the local losers and a more enthusiastic tone for the local winners?
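That tone-switching step is just another branch in the template, selected by whether the local side won. A hedged sketch, with invented team names and phrasing, of how the same fixture could be told from each town's point of view:

```python
# Hypothetical tone branches: the same match data yields different
# copy depending on which side is "local" to the reader.
def local_angle(local_team, opponent, local_goals, opp_goals):
    score = f"{local_goals}-{opp_goals}"
    if local_goals > opp_goals:
        # Enthusiastic branch for the winners' local paper.
        return f"Brilliant {local_team} stormed to a {score} win over {opponent}!"
    if local_goals < opp_goals:
        # Sympathetic branch for the losers' local paper.
        return f"Battling {local_team} fell just short, losing {score} to {opponent}."
    return f"{local_team} shared the points with {opponent} in a {score} draw."

# One fixture, two readerships:
print(local_angle("Rovers", "United", 2, 1))
print(local_angle("United", "Rovers", 1, 2))
```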
Rossmann says her conclusion is that algorithms won't replace journalism but they will extend it, "so let's be enthusiastic about the future".
Steven Morell, sales manager of automation company Ax Semantics, shared his conviction that content was still king and that computer-written text was indistinguishable from that penned by journalists.
He tested the audience by posting three sets of stories and challenging them to pick which was which - confiding that he could not tell either, and that he had added a colon to the headlines of the computer-written pieces so he could be sure of the answers.
Ax Semantics works on product data, compliance data and journalism, "doing the work that no-one wants to do", one example being comparing figures across ranges of tyres and other scalable databases in table format.
"What makes us human is the ability to tell stories," he says. The company now produces 27.2 million stories a month, up from 16 million a month late last year, with usage increasing dramatically.