Judging from the GPT deployment directions the various vendors have disclosed, most target scenarios such as security incident response, vulnerability discovery, and risk assessment. However, in conversations with industry insiders, TMTPost App found that although the deployment scenarios overlap, the technical paths the vendors take to get there show some differences.
OpenAI's ChatGPT and the subsequently iterated GPT-4, as well as Baidu's recently launched ERNIE Bot (文心一言), are all general-purpose large models with no distinct industry attributes. They handle general scenarios such as customer service or text-to-image generation with ease. But in highly vertical industries with little tolerance for wrong answers, these general-purpose models show predictable weaknesses because they lack domain expertise.
So when the network security industry adopts GPT, it cannot simply plug into an already-trained GPT-4 the way other industries do. It needs to build a new large model trained on domain expertise in network security, and only then apply it in practice.
In those conversations with industry insiders, TMTPost App also found that even when training a large model for the network security domain, vendors choose different technical routes: some build a security knowledge graph first and then refine it on top of a ChatGPT-like large model; some have no ChatGPT-like model and instead train a large model directly from the security knowledge graph; others may not emphasize a knowledge graph at all and simply train on all of their data.
"In past practice we accumulated a large volume of data, including security logs, system logs, data produced during threat-intelligence generation and analysis, open-source intelligence, security technical reports, APT reports and so on. Through AI-driven processing, this data has been turned into a set of practical attack-and-defense models and a security knowledge graph," said Ye Xiaohu, CTO of NSFOCUS.
With the practical attack-and-defense models and the security knowledge graph in place, NSFOCUS uses a ChatGPT-like large language model to further process this knowledge, which raises working efficiency and improves the quality of decisions, letting human analysts focus on more complex problems while keeping task execution accurate.
"All of our downstream tasks are built on a single large model, ChatCS. Before training and applying ChatCS, we first spent effort building a general knowledge graph for the network security domain, then used that graph to generate heterogeneous datasets for training a large model in the network security domain," said Chen Ping, project lead at Siwei Chuangzhi (四维创智). The ChatCS they released is a vertical-domain large model fine-tuned with RLHF and using a knowledge graph for domain knowledge reasoning: the approach is to first build a security knowledge graph centered on vulnerability concepts, vuln_sprocket, and then use the graph to generate prior-knowledge datasets for training the model.
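For a concrete picture of this knowledge-graph-driven route, the sketch below shows one plausible way to serialize vulnerability-centered graph triples into instruction-style training samples. It is a minimal illustration under assumed schemas and prompt templates, not Siwei Chuangzhi's actual pipeline; the triples, relation names, and output file are hypothetical.

# Hedged sketch: turning knowledge-graph triples into instruction-style
# fine-tuning samples. Triple schema and templates are illustrative assumptions.
import json
from typing import Iterable

# Example (subject, relation, object) triples centered on vulnerability concepts,
# loosely in the spirit of a graph such as vuln_sprocket.
TRIPLES = [
    ("CVE-2021-44228", "affects", "Apache Log4j 2.x"),
    ("CVE-2021-44228", "exploited_via", "JNDI lookup injection"),
    ("CVE-2021-44228", "mitigated_by", "upgrading to Log4j 2.17.1"),
]

def triples_to_samples(triples: Iterable[tuple]) -> list[dict]:
    """Map each triple to a question/answer pair usable for supervised fine-tuning."""
    templates = {
        "affects": ("Which software does {s} affect?", "{s} affects {o}."),
        "exploited_via": ("How is {s} exploited?", "{s} is exploited via {o}."),
        "mitigated_by": ("How can {s} be mitigated?", "{s} can be mitigated by {o}."),
    }
    samples = []
    for s, r, o in triples:
        question_tpl, answer_tpl = templates[r]
        samples.append({
            "instruction": question_tpl.format(s=s, o=o),
            "output": answer_tpl.format(s=s, o=o),
        })
    return samples

if __name__ == "__main__":
    # Write JSONL that a later supervised fine-tuning stage could consume.
    with open("kg_sft_samples.jsonl", "w", encoding="utf-8") as f:
        for sample in triples_to_samples(TRIPLES):
            f.write(json.dumps(sample, ensure_ascii=False) + "\n")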
So far, the other network security vendors have not fully disclosed the details of how they train their GPTs, but industry insiders reckon that 360 Group's security GPT is likely built on a general-purpose large model as the base and then tuned with network-security-related data, a route quite different from those of NSFOCUS and Siwei Chuangzhi.
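In its simplest form, the base-model-plus-domain-data route attributed to 360 Group would look like continued fine-tuning of a general model on a security corpus. The sketch below assumes the Hugging Face transformers and datasets libraries, a placeholder base model, and a hypothetical security_corpus.jsonl file; it illustrates the idea only and is not any vendor's disclosed implementation.

# Hedged sketch: continued fine-tuning of a general-purpose base model on a
# security-domain corpus. Model name, data file, and hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE_MODEL = "gpt2"  # placeholder; a real effort would start from a far larger base model

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Security-domain text (logs, advisories, APT reports, ...) in JSONL with a "text" field.
dataset = load_dataset("json", data_files="security_corpus.jsonl", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="security-gpt",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()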
Still, the different routes converge. NSFOCUS CTO Ye Xiaohu says the intelligent security service robot they are building can play a positive role in security incident emergency response and disposal, large-scale log analysis and investigation, intelligent security reasoning and decision-making, code writing in the security domain, and other areas. The GPT practices of other vendors largely overlap with these, and this again looks like a long-distance race worth watching.

Since ChatGPT's explosive debut, only a little over 100 days have passed. That network security companies could spin up ChatGPT-style projects within three or four months owes a great deal to the relevant datasets they had already accumulated. But one question is worth considering: if the network security industry needed its own large models so urgently, and was capable of building them, why did the security community only start paying attention to the changes large models could bring after OpenAI made them famous?
The reason may be that, for large models, technology is not the fundamental obstacle. Part of the problem lies in high-quality security data corpora; the other part lies in sustained faith in artificial intelligence and the mindset of continuously training large models.
"Before only small-scale try out Network Security AI automation things once or twice not achieve expected results then give up But Chatgpt tell us this route works everyone dare to put in." A Network Security entrepreneur said
Beyond proving that the experiment can succeed, the arrival of large models sends the network security industry another signal: large models may genuinely change the underlying logic of attack and defense. "If general-purpose large models can perform intelligent reasoning and intelligent decision-making, and this road actually proves viable, attack and defense can shift from the experience-driven work of the past to paradigm-driven work. Experience used to live in human brains and could not be reused; if this path works, a great deal will change," Ye Xiaohu said.
Clearly, OpenAI's ChatGPT is only an opening act, and Chinese entrepreneurs are flocking in for their own rounds of trial and error. This, too, will be a long race.