Ask HN: How will NLP models affect online discussion?
2 by daenz | 0 comments on Hacker News.
With the recent public advances in NLP [0], I can't help but wonder if/when it will become a common suspicion that people are arguing with advanced bots. It doesn't take an expert to see how these sophisticated language models could be used to broadly target narratives online with counter-narratives through sock-puppet accounts. Imagine being in control of a fleet of a million social media bots, each with a distinct way of writing and wording its arguments, and each able to engage with any content that passes through a sentiment-analysis and classification filter. A million social media bots could all push, say, pro-life or anti-gun narratives, generating massive amounts of coherent text on the subject. They could engage with dissenters automatically and endlessly, producing a demoralizing volume of content and using multiple accounts to reinforce each other.

Do you foresee some form of this happening? What do you think the outcome will be? If it happens, will people be aware of it, or will it be dismissed as a conspiracy theory? Have state actors already been doing this for a while? How will people regain trust in one another under the looming suspicion that the person they're engaging with isn't genuine? Are the services where these conversations take place prepared to verify users and ban non-human accounts at massive scale?

0. https://ift.tt/36FzDY4
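To make the "engagement filter" idea concrete, here is a minimal sketch of the gating step such a bot might use: classify whether a post is on the target topic and whether it dissents from the desired narrative, and only engage when both are true. The keyword lists and substring matching here are hypothetical stand-ins for real NLP classifiers (a production system would use trained sentiment and topic models, not keywords):

```python
# Hypothetical engagement filter: engage only with on-topic dissent.
# Keyword matching is a toy stand-in for real sentiment/topic models
# (note: naive substring checks would also match e.g. "ban" in "banana").

TARGET_TOPIC_KEYWORDS = {"gun", "firearm", "second amendment"}
DISSENT_KEYWORDS = {"ban", "restrict", "control"}


def on_topic(text: str) -> bool:
    """Stand-in for a topic classifier."""
    t = text.lower()
    return any(k in t for k in TARGET_TOPIC_KEYWORDS)


def dissents(text: str) -> bool:
    """Stand-in for sentiment/stance analysis against the bot's narrative."""
    t = text.lower()
    return any(k in t for k in DISSENT_KEYWORDS)


def should_engage(text: str) -> bool:
    """The filter described in the post: classification + sentiment gate."""
    return on_topic(text) and dissents(text)


posts = [
    "We should ban all firearms now.",
    "Nice weather today.",
    "Firearms safety courses are great.",
]
engaged = [p for p in posts if should_engage(p)]
print(engaged)  # only the first post passes both gates
```

Each post that passes the gate would then be handed to a language model to generate a stylistically distinct counter-argument, one per sock-puppet account.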