ChatGPT: five priorities for research

Eva A. M. van Dis, Johan Bollen, Robert van Rooij, Willem Zuidema & Claudi L. Bockting
Nature, Vol. 614, 9 February 2023

Conversational AI is a game-changer for science. Here's how to respond.

[Figure: A chatbot called ChatGPT can help to write text for essays, scientific abstracts and more. Image: Vitor Miranda/Alamy]

Since a chatbot called ChatGPT was released late last year, it has become apparent that this type of artificial intelligence (AI) technology will have huge implications on the way in which researchers work.

ChatGPT is a large language model (LLM), a machine-learning system that autonomously learns from data and can produce sophisticated and seemingly intelligent writing after training on a massive data set of text. It is the latest in a series of such models released by OpenAI, an AI company in San Francisco, California, and by other firms. ChatGPT has caused excitement and controversy because it is one of the first models that can convincingly converse with its users in English and other languages on a wide range of topics. It is free, easy to use and continues to learn.

This technology has far-reaching consequences for science and society. Researchers and others have already used ChatGPT and other large language models to write essays and talks, summarize literature, draft and improve papers, as well as identify research gaps and write computer code, including statistical analyses. Soon this technology will evolve to the point that it can design experiments, write and complete manuscripts, conduct peer review and support editorial decisions to accept or reject manuscripts.

Conversational AI is likely to revolutionize research practices and publishing, creating both opportunities and concerns. It might accelerate the innovation process, shorten time-to-publication and, by helping people to write fluently, make science more equitable and increase the diversity of scientific perspectives. However, it could also degrade the quality and transparency of research and fundamentally alter our autonomy as human researchers. ChatGPT and other LLMs produce text that is convincing, but often wrong, so their use can distort scientific facts and spread misinformation.

We think that the use of this technology is inevitable; therefore, banning it will not work. It is imperative that the research community engage in a debate about the implications of this potentially disruptive technology. Here, we outline five key issues and suggest where to start.

Hold on to human verification

LLMs have been in development for years, but continuous increases in the quality and size of data sets, and sophisticated methods to calibrate these models with human feedback, have suddenly made them much more powerful than before. LLMs will lead to a new generation of search engines [1] that are able to produce detailed and informative answers to complex user questions.

But using conversational AI for specialized research is likely to introduce inaccuracies, bias and plagiarism. We presented ChatGPT with a series of questions and assignments that required an in-depth understanding of the literature and found that it often generated false and misleading text. For example, when we asked "how many patients with depression experience relapse after treatment?", it generated an overly general text arguing that treatment effects are typically long-lasting. However, numerous high-quality studies show that treatment effects wane and that the risk of relapse ranges from 29% to 51% in the first year after treatment completion [2-4]. Repeating the same query generated a more detailed and accurate answer (see Supplementary information, Figs S1 and S2).
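Run-to-run variation like this is easy to probe systematically: the same prompt can be sent to a model several times and the answers compared side by side. Below is a minimal sketch, assuming the openai Python package and an API key in the environment; the model name and prompt are illustrative, not the authors' setup.

# Sketch: re-issue the same literature question several times and
# collect the answers for manual, expert fact-checking.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

PROMPT = "How many patients with depression experience relapse after treatment?"

answers = []
for run in range(3):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model name
        messages=[{"role": "user", "content": PROMPT}],
        temperature=1.0,        # default sampling; answers may vary per run
    )
    answers.append(response.choices[0].message.content)

# Print the runs side by side; divergent answers flag claims that
# need verification against the primary literature.
for i, text in enumerate(answers, start=1):
    print(f"--- Run {i} ---\n{text}\n")

Divergence across runs does not prove error, but it is a cheap signal for where expert checking should start.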

Next, we asked ChatGPT to summarize a systematic review that two of us authored in JAMA Psychiatry [5] on the effectiveness of cognitive behavioural therapy (CBT) for anxiety-related disorders. ChatGPT fabricated a convincing response that contained several factual errors, misrepresentations and wrong data (see Supplementary information, Fig. S3). For example, it said the review was based on 46 studies (it was actually based on 69) and, more worryingly, it exaggerated the effectiveness of CBT.

Such errors could be due to an absence of the relevant articles in ChatGPT's training set, a failure to distil the relevant information, or an inability to distinguish between credible and less-credible sources. It seems that the same biases that often lead humans astray, such as availability, selection and confirmation biases, are reproduced and often even amplified in conversational AI [6].

Researchers who use ChatGPT risk being misled by false or biased information, and incorporating it into their thinking and papers. Inattentive reviewers might be hoodwinked into accepting an AI-written paper by its beautiful, authoritative prose owing to the halo effect, a tendency to over-generalize from a few salient positive impressions [7]. And, because this technology typically reproduces text without reliably citing the original sources or authors, researchers using it are at risk of not giving credit to earlier work, unwittingly plagiarizing a multitude of unknown texts and perhaps even giving away their own ideas. Information that researchers reveal to ChatGPT and other LLMs might be incorporated into the model, which the chatbot could serve up to others with no acknowledgement of the original source.

Assuming that researchers use LLMs in their work, scholars need to remain vigilant. Expert-driven fact-checking and verification processes will be indispensable. Even when LLMs are able to accurately expedite summaries, evaluations and reviews, high-quality journals might decide to include a human verification step or even ban certain applications that use this technology. To prevent human automation bias, an over-reliance on automated systems, it will become even more crucial to emphasize the importance of accountability [8]. We think that humans should always remain accountable for scientific practice.

Develop rules for accountability

Tools are already available to predict the likelihood that a text originates from machines or humans. Such tools could be useful for detecting the inevitable use of LLMs to manufacture content by paper mills and predatory journals, but such detection methods are likely to be circumvented by evolved AI technologies and clever prompts.
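One common family of such detectors scores how statistically predictable a text is to a reference language model: machine-generated prose tends to have lower perplexity than human writing. Here is a rough sketch of that idea, assuming the Hugging Face transformers and torch packages, with the small open GPT-2 model as the scorer; the cutoff is illustrative, not a calibrated detector.

# Sketch of a perplexity-based machine-text heuristic.
# Assumes `pip install torch transformers`. GPT-2 is used only as a
# convenient open scoring model; the threshold below is illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2; lower = more 'model-like'."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        # Passing labels makes the model return the mean cross-entropy loss.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return torch.exp(loss).item()

sample = "Cognitive behavioural therapy is an effective treatment for anxiety."
score = perplexity(sample)
print(f"perplexity = {score:.1f}")
# A naive rule: flag very low-perplexity passages for human review.
if score < 40:  # illustrative, uncalibrated threshold
    print("Statistically smooth text; consider checking for machine authorship.")

Paraphrasing tools and prompt tweaks can push machine text back into the "human" range, which is exactly the circumvention risk described above.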

Rather than engage in a futile arms race between AI chatbots and AI-chatbot detectors, we think the research community and publishers should work out how to use LLMs with integrity, transparency and honesty. Author-contribution statements and acknowledgements in research papers should state clearly and specifically whether, and to what extent, the authors used AI technologies such as ChatGPT in the preparation of their manuscript and analysis. They should also indicate which LLMs were used. This will alert editors and reviewers to scrutinize manuscripts more carefully for potential biases, inaccuracies and improper source crediting. Likewise, scientific journals should be transparent about their use of LLMs, for example when selecting submitted manuscripts.

Research institutions, publishers and funders should adopt explicit policies that raise awareness of, and demand transparency about, the use of conversational AI in the preparation of all materials that might become part of the published record. Publishers could request author certification that such policies were followed.

For now, LLMs should not be authors of manuscripts because they cannot be held accountable for their work. But it might be increasingly difficult for researchers to pinpoint the exact role of LLMs in their studies. In some cases, technologies such as ChatGPT might generate significant portions of a manuscript in response to an author's prompts. In others, the authors might have gone through many cycles of revisions and improvements using the AI as a grammar checker or spellchecker, but not have used it to author the text. In the future, LLMs are likely to be incorporated into text processing and editing tools, search engines and programming tools. Therefore, they might contribute to scientific work without authors necessarily being aware of the nature or magnitude of the contributions. This defies today's binary definitions of authorship, plagiarism and sources, in which someone is either an author, or not, and a source has either been used, or not. Policies will have to adapt, but full transparency will always be key.

Inventions devised by AI are already causing a fundamental rethink of patent law [9], and lawsuits have been filed over the copyright of code and images that are used to train AI, as well as those generated by AI. In the case of AI-written or AI-assisted manuscripts, the research and legal community will also need to work out who holds the rights to the texts. Is it the individual who wrote the text that the AI system was trained with, the corporations who produced the AI, or the scientists who used the system to guide their writing? Again, definitions of authorship must be considered and defined.

Invest in truly open LLMs

Currently, nearly all state-of-the-art conversational AI technologies are proprietary products of a small number of big technology companies that have the resources for AI development. OpenAI is funded largely by Microsoft, and other major tech firms are racing to release similar tools. Given the near-monopolies in search, word processing and information access of a few tech companies, this raises considerable ethical concerns.

One of the most immediate issues for the research community is the lack of transparency. The underlying training sets and LLMs for ChatGPT and its predecessors are not publicly available, and tech companies might conceal the inner workings of their conversational AIs. This goes against the move towards transparency and open science, and makes it hard to uncover the origin of, or gaps in, chatbots' knowledge [10].

For example, we prompted ChatGPT to explain the work of several researchers. In some instances, it produced detailed accounts of scientists who could be considered less influential on the basis of their h-index (a way of measuring the impact of their work). Although it succeeded for a group of researchers with an h-index of around 20, it failed to generate any information at all on the work of several highly cited and renowned scientists, even those with an h-index of more than 80.
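For reference, the h-index mentioned above is easy to compute from a list of per-paper citation counts: it is the largest h such that at least h of a researcher's papers have at least h citations each. A small sketch, with fabricated citation counts purely for illustration:

def h_index(citations: list[int]) -> int:
    """Largest h such that at least h papers have >= h citations."""
    cited = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(cited, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Illustrative (made-up) citation counts for one researcher's papers.
print(h_index([120, 80, 43, 22, 9, 9, 5, 3, 0]))  # -> 6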

To counter this opacity, the development and implementation of open-source AI technology should be prioritized. Non-commercial organizations such as universities typically lack the computational and financial resources needed to keep up with the rapid pace of LLM development. We therefore advocate that scientific-funding organizations, universities, non-governmental organizations (NGOs), government research facilities and organizations such as the United Nations, as well as tech giants, make considerable investments in independent non-profit projects. This will help to develop advanced open-source, transparent and democratically controlled AI technologies.

Critics might say that such collaborations will be unable to rival big tech, but at least one mainly academic collaboration, BigScience, has already built an open-source language model, called BLOOM. Tech companies might benefit from such a program by open-sourcing relevant parts of their models and corpora in the hope of creating greater community involvement, facilitating innovation and reliability. Academic publishers should ensure LLMs have access to their full archives so that the models produce results that are accurate and comprehensive.
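BLOOM's weights are openly downloadable, which is exactly the kind of transparency argued for here. A sketch of loading a small public BLOOM checkpoint with the Hugging Face transformers library follows; the 560-million-parameter variant is chosen only to keep memory needs modest, and the prompt is illustrative.

# Sketch: text generation with an open-source BLOOM checkpoint.
# Assumes `pip install torch transformers` (downloads roughly 1 GB of weights).
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigscience/bloom-560m"  # small public variant of BLOOM
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

inputs = tokenizer("Conversational AI is likely to", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Because the weights and training documentation are public, researchers can inspect, fine-tune and audit such a model in ways that a closed chat service does not allow.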

Embrace the benefits of AI

As the workload and competition in academia increase, so does the pressure to use conversational AI. Chatbots provide opportunities to complete tasks quickly, from PhD students striving to finalize their dissertations to researchers needing a quick literature review for their grant proposals, or peer reviewers under time pressure to submit their analyses. If AI chatbots can help with these tasks, results can be published faster, freeing academics up to focus on new experimental designs. This could significantly accelerate innovation and potentially lead to breakthroughs across many disciplines.

We think this technology has enormous potential, provided that the current teething problems related to bias, provenance and inaccuracies are ironed out. It is important to examine and advance the validity and reliability of LLMs so that researchers know how to use the technology judiciously for specific research practices.

Some argue that because chatbots merely learn statistical associations between words in their training set, rather than understand their meanings, LLMs will only ever be able to recall and synthesize what people have already done and not exhibit human aspects of the scientific process, such as creative and conceptual thought. We argue that this is a premature assumption, and that future AI tools might be able to master aspects of the scientific process that seem out of reach today. In a seminal 1991 paper, researchers wrote that "intelligent partnerships" between people and intelligent technology can outperform the intellectual ability of people alone [11]. These intelligent partnerships could exceed human abilities and accelerate innovation to previously unthinkable levels. The question is how far can and should automation go?

AI technology might rebalance the academic skill set. On the one hand, AI could optimize academic training, for example by providing feedback to improve student writing and reasoning skills. On the other hand, it might reduce the need for certain skills, such as the ability to perform a literature search. It might also introduce new skills, such as prompt engineering (the process of designing and crafting the text that is used to prompt conversational AI models; see the sketch after this paragraph). The loss of certain skills might not necessarily be problematic (for example, most researchers do not perform statistical analyses by hand any more), but as a community we need to carefully consider which academic skills and characteristics remain essential to researchers.
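In practice, prompt engineering often amounts to disciplined templating: fixing the role, the constraints and the output format of a request so that it can be reused, versioned and audited. A toy sketch of such a template; the structure and wording are illustrative, not a standard.

# Sketch: a reusable, auditable prompt template for literature questions.
# The template structure below is illustrative; nothing here is a fixed standard.
REVIEW_PROMPT = """You are assisting with a literature review.
Question: {question}
Constraints:
- Cite specific studies where possible and say when you are uncertain.
- If you do not know, say so rather than guessing.
Answer in at most {max_sentences} sentences."""

def build_prompt(question: str, max_sentences: int = 5) -> str:
    """Fill the template so that every query sent to a model is explicit and repeatable."""
    return REVIEW_PROMPT.format(question=question, max_sentences=max_sentences)

print(build_prompt("How many patients with depression relapse after treatment?"))

Versioning such templates alongside analysis code would also make the AI's role in a manuscript easier to report, in the spirit of the transparency policies discussed above.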

If we care only about performance, people's contributions might become more limited and obscure as AI technology advances. In the future, AI chatbots might generate hypotheses, develop methodology, create experiments [12], analyse and interpret data and write manuscripts. In place of human editors and reviewers, AI chatbots could evaluate and review the articles, too. Although we are still some way from this scenario, there is no doubt that conversational AI technology will increasingly affect all stages of the scientific publishing process. Therefore, it is imperative that scholars, including ethicists, debate the trade-off between the use of AI (creating a potential acceleration in knowledge generation) and the loss of human potential.
