본문 바로가기
카테고리 없음

기업 genAI 정책을 만드는 이유와 방법

by EasyGPT 2023. 8. 22.
반응형

Why and how to create corporate genAI policies
기업 genAI 정책을 만드는 이유와 방법

 

생성 AI 채택은 엄청난 속도로 진행되고 있지만, 이 기술로 인한 잠재적 위협으로 인해 조직은 중요한 데이터와 고객 개인정보를 보호하고 규제기관을 위반하지 않도록 가드레일을 설정해야 합니다.
Adoption of generative AI is happening at a breakneck pace, but potential threats posed by the technology will require organizations to set up guardrails to protect sensitive data and customer privacy — and to avoid running afoul of regulators.

By Lucas Mearian

Senior Reporter, Computerworld | AUG 21, 2023 3:00 AM PDT

많은 수의 기업이 생성 인공 지능(genAI) 도구를 계속 테스트하고 배포함에 따라 많은 기업이 민감한 데이터의 잠재적 노출 가능성은 말할 것도 없고 AI 오류, 악의적 공격, 규제 기관 위반의 위험에 처해 있습니다.

예를 들어, 지난 4월 삼성 반도체 사업부가 엔지니어들에게 ChatGPT 사용을 허용한 후, 게시된 계정에 따르면 플랫폼을 사용하는 직원들은 최소 3차례 영업 비밀을 유출했습니다. 
한 직원은 오류를 확인하기 위해 기밀 소스 코드를 채팅에 붙여넣었고, 다른 직원은 ChatGPT와 코드를 공유하고 "코드 최적화를 요청했습니다."


ChatGPT는 개발자 OpenAI에서 호스팅하며 민감한 정보는 삭제할 수 없으므로 사용자에게 공유하지 말 것을 요청합니다.

[ 실험과 명확한 지침으로 생성 AI 준비 ]
시스템 통합 제공업체 Insight Enterprises의 글로벌 CTO인 Matthew Jackson은 "그 시점에서는 Google을 사용하는 것과 거의 같습니다."라고 말했습니다. 
“귀하의 데이터는 OpenAI에 의해 저장되고 있습니다. 
그들은 당신이 그 채팅 창에 넣은 모든 것을 사용할 수 있습니다. 
ChatGPT를 사용하여 일반 콘텐츠를 작성할 수는 있지만 해당 창에 기밀 정보를 붙여넣고 싶지는 않습니다."

As a large number of companies continue to test and deploy generative artificial intelligence (genAI) tools, many are at risk of AI errors, malicious attacks, and running afoul of regulators — not to mention the potential exposure of sensitive data.

For example, in April, after Samsung’s semiconductor division allowed engineers to use ChatGPT, workers using the platform leaked trade secrets on least three instances, according to published accounts. One employee pasted confidential source code into the chat to check for errors, while another worker shared code with ChatGPT and “requested code optimization.”

ChatGPT is hosted by its developer, OpenAI, which asks users not to share any sensitive information because it cannot be deleted.

“It’s almost like using Google at that point,” said Matthew Jackson, global CTO at systems integration provider Insight Enterprises. “Your data is being saved by OpenAI. They’re allowed to use whatever you put into that chat window. You can still use ChatGPT to help write generic content, but you don’t want to paste confidential information into that window.”

 

Gartner 부사장 저명한 분석가 Avivah Litan에 따르면 결론은 LLM(대형 언어 모델) 및 기타 genAI 애플리케이션이 "완전히 구워지지 않았다"는 것입니다.
그녀는 “정확성 문제, 법적 책임 및 개인 정보 보호 문제, 보안 취약성이 여전히 존재하고 예측할 수 없거나 바람직하지 않은 방향으로 방향을 틀 수 있지만 완전히 사용할 수 있으며 생산성과 혁신을 크게 향상시킵니다.”라고 말했습니다.


최근 Harris Poll에 따르면 비즈니스 리더들이 내년에 genAI 도구를 출시하는 가장 큰 2가지 이유는 수익을 늘리고 혁신을 주도하는 것입니다. 
거의 절반(49%)이 기술 혁신에서 경쟁사와 보조를 맞추는 것이 올해 가장 중요한 과제라고 답했습니다. (Harris Poll은 2023년 4월부터 5월까지 이사 이상으로 근무한 직원 1,000명을 대상으로 설문조사를 실시했습니다.)


설문조사에 참여한 사람들은 직원 생산성(72%)을 AI의 가장 큰 이점으로 꼽았고, 고객 참여(챗봇을 통한)와 연구 개발이 각각 2위와 3위를 차지했습니다.
The bottom line is that large language models (LLMs) and other genAI applications “are not fully baked,” according to Avivah Litan, a vice president and distinguished Gartner analyst. “They still have accuracy issues, liability and privacy concerns, security vulnerabilities, and can veer off in unpredictable or undesirable directions,” she said, “but they are entirely usable and provide an enormous boost to productivity and innovation.”

A recent Harris Poll found that business leaders’ top two reasons for rolling out genAI tools over the next year are to increase revenue and drive innovation. Almost half (49%) said keeping pace with competitors on tech innovation is a top challenge this year. (The Harris Poll surveyed 1,000 employees employed as directors or higher between April and May 2023.)

Those polled named employee productivity (72%) as the greatest benefit of AI, with customer engagement (via chatbots) and research and development taking second and third, respectively.

The Harris Poll/Insight

AI adoption explodes AI 채택 폭발적


컨설팅 회사 Ernst & Young(EY)과 조사기관 The Harris Poll이 실시한 별도의 설문조사에 따르면, 대부분의 비즈니스 리더는 향후 3년내에 직원의 생산성을 높이고 고객 서비스를 향상하기 위해 genAI를 도입할 것으로 예상합니다.
그리고 대다수 CEO는 AI를 제품/서비스에 통합하거나 12개월 이내에 통합할 계획입니다.

Within the next three years, most business leaders expect to adopt genAI to make employees more productive and enhance customer service, according to separate surveys by consultancy Ernst & Young (EY) and research firm The Harris Poll. And a majority of CEOs are integrating AI into products/services or planning to do so within 12 months.



EY는 설문조사 보고서에서 “2023년에는 어떤 기업 리더도 AI를 무시할 수 없다”고 밝혔습니다.
"오늘날 리더 중 82%는 조직이 생성 AI와 같은 디지털 변혁 이니셔티브에 투자하지 않으면 뒤쳐져야 한다고 믿습니다."
“No corporate leader can ignore AI in 2023,” EY said in its survey report. “Eighty-two percent of leaders today believe organizations must invest in digital transformation initiatives, like generative AI, or be left behind.”


시스템 통합 서비스 공급업체 Insight Enterprises가 의뢰한 The Harris Poll의 응답자 중 약 절반은 제품 품질을 보장하고 안전 및 보안 위험을 해결하기 위해 AI를 수용하고 있다고 밝혔습니다.
About half of respondents to The Harris Poll, which was commissioned by systems integration services vendor Insight Enterprises, indicated they’re embracing AI to ensure product quality and to address safety and security risks.

EY가 조사한 미국 CEO 중 42%는 이미 AI 기반 제품 또는 서비스 변경을 자본 배분 프로세스에 완전히 통합했으며 AI 기반 혁신에 적극적으로 투자하고 있다고 답했으며, 38%는 AI 기반 혁신에 대규모 자본 투자를 계획하고 있고, 향후 12개월 동안 기술을 개발할 것이라고 답했습니다. 

Forty-two percent of US CEOs surveyed by EY said they have already fully integrated AI-driven product or service changes into their capital allocation processes and are actively investing in AI-driven innovation, while 38% say they plan to make major capital investments in the technology over the next 12 months.

Insight

The Harris Poll에 따르면 조사 대상자의 절반 이상(53%)이 genAI를 사용하여 연구 개발을 지원할 것으로 예상하고 있으며 50%는 소프트웨어 개발/테스트에 사용할 계획이라고 합니다.

최고 경영진은 genAI의 중요성을 인식하면서도 여전히 경계하고 있습니다. EY 여론조사에 참여한 CEO 중 63%는 이것이 선을 위한 힘이며 비즈니스 효율성을 촉진할 수 있다고 답했지만, 64%는 genAI 사용이 비즈니스와 사회에 미치는 의도하지 않은 결과를 관리하기에는 충분한 조치가 이루어지지 않고 있다고 생각합니다.

두 여론조사에 따르면 “AI의 의도하지 않은 결과”를 고려하여 10개 조직 중 8개 조직이 AI 정책과 전략을 마련했거나 도입을 고려하고 있는 것으로 나타났습니다.

Just over half (53%) of those surveyed expect to use genAI to assist with research and development, and 50% plan to use it for software development/testing, according to The Harris Poll.

While C-suite leaders recognize the importance of genAI, they also remain wary. Sixty-three percent of CEOs in the EY poll said it is a force for good and can drive business efficiency, but 64% believe not enough is being done to manage any unintended consequences of genAI use on business and society.

In light of the “unintended consequences of AI,” eight in 10 organizations have either put in place AI policies and strategies or are considering doing so, according to both polls.

AI problems and solutions AI 문제와 해결책


Gartner의 Risk & Audit Practice 연구 담당 이사인 Ran Xu에 따르면 Generative AI는 Gartner의 2분기 설문 조사에서 두 번째로 가장 자주 언급된 위험으로 처음으로 상위 10위 안에 들었습니다.

Xu는 성명에서 “이는 생성 AI 도구에 대한 대중의 인식과 사용의 급속한 성장, 잠재적 사용 사례의 폭, 따라서 이러한 도구가 야기하는 잠재적인 위험을 모두 반영합니다.”라고 말했습니다.

genAI 앱이 정확하고 사실인 것처럼 보이지만 사실이 아닌 사실과 데이터를 제시하는 환각은 주요 위험입니다. AI 결과물은 의도치 않게 타인의 지적재산권을 침해하는 것으로 알려져 있다. genAI 도구를 사용하면 사용자 정보가 사전 통지 없이 공급업체나 서비스 제공업체 등 제3자와 공유될 수 있으므로 개인정보 보호 문제가 발생할 수 있습니다. 해커는 대규모 언어 모델이 쿼리에 응답하는 방식을 조작하기 위해 "즉시 삽입 공격"이라는 방법을 사용하고 있습니다.

잭슨은 "사람들이 질문을 하고 데이터가 정확하다고 가정하고 부정확한 데이터로 중요한 비즈니스 결정을 내릴 수 있다는 점에서 이는 잠재적인 위험 중 하나"라고 말했습니다. “잘못된 데이터를 사용하는 것이 가장 큰 우려 사항이었습니다. 우리 설문조사에서 2위는 보안이었습니다.

 

”Generative AI was the second most-frequently named risk in Gartner's second quarter survey, appearing in the top 10 for the first time, according to Ran Xu director, research in Gartner's Risk & Audit Practice.

“This reflects both the rapid growth of public awareness and usage of generative AI tools, as well as the breadth of potential use cases, and therefore potential risks, that these tools engender," Xu said in a statement.

Hallucinations, in which genAI apps present facts and data that look accurate and factual but are not, are a key risk. AI outputs are known to inadvertently infringe on the intellectual property rights of others. The use of genAI tools can raise privacy issues, as they may share user information with third parties, such as vendors or service providers, without prior notice. Hackers are using a method known as "prompt injection attacks" to manipulate how a large language model responds to queries.

“That’s one potential risk in that people may ask it a question and assume the data is correct and go off and make some important business decision with inaccurate data,” Jackson said. “That was the number one concern — using bad data. Number two in our survey was security.”

The Harris Poll/Insight

The problems organizations face when deploying genAI, Litan explained, lie in three main categories:

  • Input and output, which includes unacceptable use that compromises enterprise decision-making and confidentiality, leaks of sensitive data, and inaccurate outputs (including hallucinations).
  • Privacy and data protection, which includes data leaks via a hosted LLM vendor’s system, incomplete data privacy or protection policies, and a failure to meet regulatory compliance rules.
  • Cybersecurity risks, which include hackers accessing LLMs and their parameters to influence AI outputs.

Litan은 조직이 genAI를 배포할 때 직면하는 문제는 세 가지 주요 범주에 있다고 설명했습니다.

기업의 의사 결정 및 기밀성을 손상시키는 용납할 수 없는 사용, 민감한 데이터의 유출, 부정확한 출력(환각 포함)을 포함하는 입력 및 출력입니다.
호스팅된 LLM 공급업체 시스템을 통한 데이터 유출, 불완전한 데이터 개인정보 보호 또는 보호 정책, 규제 준수 규칙 미준수 등을 포함한 개인정보 보호 및 데이터 보호.
AI 결과에 영향을 미치기 위해 LLM 및 해당 매개변수에 액세스하는 해커를 포함하는 사이버 보안 위험.
Litan은 이러한 종류의 위협을 완화하려면 계층화된 보안 및 위험 관리 접근 방식이 필요하다고 말했습니다. 조직이 원치 않거나 불법적인 입력 또는 출력의 가능성을 줄일 수 있는 여러 가지 방법이 있습니다.

첫째, 조직은 허용 가능한 사용에 대한 정책을 정의하고 의도된 용도와 요청되는 데이터를 포함하여 genAI 애플리케이션 사용 요청을 기록하는 시스템과 프로세스를 구축해야 합니다. GenAI 애플리케이션을 사용하려면 다양한 감독자의 승인도 필요합니다.

조직은 호스팅된 LLM 환경에 제출된 정보에 대해 입력 콘텐츠 필터를 사용할 수도 있습니다. 이는 허용 가능한 사용을 위해 기업 정책에 대한 입력을 선별하는 데 도움이 됩니다.

개인 정보 보호 및 데이터 보호 위험은 신속한 데이터 스토리지 호스팅을 선택 해제하고 공급업체가 모델 교육에 기업 데이터를 사용하지 않도록 함으로써 완화될 수 있습니다. 또한 기업은 LLM 환경의 데이터 보호에 대한 규칙과 책임을 정의하는 호스팅 공급업체의 라이센스 계약을 철저히 검토해야 합니다.Mitigating those kinds of threats, Litan said, requires a layered security and risk management approach. There are several different ways organizations can reduce the prospect of unwanted or illegitimate inputs or outputs. 

First, organizations should define policies for acceptable use and establish systems and processes to record requests to use genAI applications, including the intended use and the data being requested. GenAI application use should also require approvals by various overseers.

Organizations can also use input content filters for information submitted to hosted LLM environments. This helps screen inputs against enterprise policies for acceptable use.

Privacy and data protection risks can be mitigated by opting out of hosting a prompt data storage, and by making sure a vendor doesn’t use corporate data to train its models. Additionally, companies should comb through a hosting vendor’s licensing agreement, which defines the rules and its responsibility for data protection in its LLM environment.

Gartner

Lastly, organizations need to be aware of prompt injection attacks, which is a malicious input designed to trick a LLM into changing its desired behavior. That can result in stolen data or customers being scammed by the generative AI systems.

Organizations need strong security around the local Enterprise LLM environment, including access management, data protection, and network and endpoint security, according to Gartner.

Litan recommends that genAI users deploy Security Service Edge software that combines networking and security together into a cloud-native software stack that protects an organization’s edges, its sites and applications.

Additionally, organizations should hold their LLM or genAI service providers accountable for how they prevent indirect prompt injection attacks on their LLMs, over which a user organization has no control or visibility.

마지막으로 조직은 LLM을 속여 원하는 동작을 변경하도록 설계된 악의적인 입력인 프롬프트 주입 공격을 인식해야 합니다. 이로 인해 데이터가 도난당하거나 생성 AI 시스템에 의해 고객이 사기를 당할 수 있습니다.

Gartner에 따르면 조직에는 액세스 관리, 데이터 보호, 네트워크 및 엔드포인트 보안을 포함하여 로컬 Enterprise LLM 환경에 대한 강력한 보안이 필요합니다.

Litan은 genAI 사용자가 조직의 엣지, 사이트 및 애플리케이션을 보호하는 클라우드 네이티브 소프트웨어 스택에 네트워킹과 보안을 결합하는 Security Service Edge 소프트웨어를 배포할 것을 권장합니다.

또한 조직은 LLM 또는 genAI 서비스 제공자에게 사용자 조직이 제어하거나 가시성을 가질 수 없는 LLM에 대한 간접 프롬프트 주입 공격을 방지하는 방법에 대한 책임을 져야 합니다.

AI의 장점은 위험보다 클 수 있습니다
기업이 저지르는 실수 중 하나는 AI를 사용하는 위험을 감수할 가치가 없다고 결정하는 것입니다. 따라서 "대부분의 기업이 내놓는 첫 번째 정책은 '사용하지 마세요'입니다."라고 Insight의 Jackson은 말했습니다.

“이것이 우리의 첫 번째 정책이기도 했습니다.”라고 그는 말했습니다. “그러나 우리는 Azure 기술에 Microsoft OpenAI를 사용하여 매우 신속하게 개인 테넌트를 구축했습니다. 그래서 우리는 일부 민간 기업 데이터에 연결할 수 있는 안전한 환경을 만들었습니다. 그래서 우리는 사람들이 그것을 사용할 수 있도록 허용할 수 있었습니다.”

 

AI’s advantages can outweigh its risks

One mistake companies make is to decide that it’s not worth the risk to use AI, so “the first policy most companies come up with is ‘don’t use it,’” Insight’s Jackson said.

“That was our first policy as well,” he said. “But we very quickly stood up a private tenant using Microsoft’s OpenAI on Azure’s technology. So, we created an environment that was secure, where we were able to connect to some of our private enterprise data. So, that way we could allow people to use it.”

IDC

한 Insight 직원은 생성 AI 기술이 Excel과 비슷하다고 설명했습니다. “사람들에게 Excel을 주기 전에 Excel을 어떻게 사용할 것인지 묻지 않습니다. 그냥 그들에게 주면 그들은 그것을 사용할 수 있는 창의적인 방법을 생각해 냅니다.”라고 잭슨은 말했습니다.

Insight는 기술에 대한 회사의 경험을 고려하여 genAI 사용 사례에 대해 많은 고객과 이야기를 나눴습니다.

“일부 파일럿을 통해 우리가 깨달은 것 중 하나는 AI가 실제로는 일반적인 생산성 도구일 뿐이라는 것입니다. Jackson은 이렇게 말했습니다. "너무 많은 사용 사례를 처리할 수 있습니다....우리가 결정한 것은 지나치게 사용자 정의하기 위해 길고 지루한 프로세스를 거치는 대신 일부 부서에 배포하기로 결정했습니다. 그들이 할 수 있는 것과 할 수 없는 것에 대한 몇 가지 일반적인 프레임워크와 경계를 정한 다음 그들이 생각해낸 것이 무엇인지 살펴보세요.”

Insight Enterprises가 ChatGPT를 사용한 첫 번째 작업 중 하나는 고객이 기술을 구매하고 회사가 해당 장치를 이미지화하여 고객에게 보내는 유통 센터였습니다. 프로세스는 제품 상태 및 공급 시스템 업데이트와 같은 일상적인 작업으로 채워집니다.

Jackson은 “그래서 우리 창고에 있는 직원 중 한 명이 생성 AI에 이러한 시스템 업데이트 중 일부를 자동화하는 스크립트를 작성하도록 요청할 수 있는지 깨달았습니다.”라고 말했습니다. "이것은 조직 전반에 걸쳐 Insight GPT라고 불리는 자체 비공개 기업 ChatGPT 인스턴스의 크라우드 소싱에서 나온 실제 사용 사례였습니다."

생성적 AI 프로그램은 상당한 수의 작업을 자동화하고 SAP 재고 시스템에 대해 실행할 수 있는 시스템 업데이트를 활성화하는 Insight의 창고 운영을 위한 짧은 Python 스크립트를 작성했습니다. 사람들이 업데이트해야 할 때마다 5분씩 걸리는 작업을 본질적으로 자동화했습니다.

“그래서 우리 창고의 생산성이 크게 향상되었습니다. 해당 센터의 나머지 직원들에게 이를 배포하자 일주일에 수백 시간이 절약되었습니다.”라고 Jackson은 말했습니다.

이제 Insight는 더 많은 사용자 정의가 필요할 수 있는 중요한 사용 사례의 우선 순위를 지정하는 데 중점을 두고 있습니다. 여기에는 프롬프트 엔지니어링을 사용하여 LLM을 다르게 교육하거나 보다 다양하거나 복잡한 백엔드 데이터 소스를 연결하는 것이 포함될 수 있습니다.

Jackson은 LLM을 회사 데이터를 제외하고 일반적으로 몇 년 된 데이터에 대해 훈련된 사전 훈련된 "블랙 박스"라고 설명했습니다. 그러나 사용자는 고급 검색 엔진과 같은 기업 데이터에 액세스하도록 API에 지시할 수 있습니다. “그래서 그렇게 하면 더 관련성이 높고 최신 콘텐츠에 액세스할 수 있게 됩니다.”라고 그는 말했습니다.

Insight는 현재 계약서 작성 방법을 자동화하는 프로젝트에서 ChatGPT와 협력하고 있습니다. 회사는 표준 ChatGPT 4.0 모델을 사용하여 이를 수만 개가 있는 기존 계약 라이브러리에 연결했습니다.

조직은 LangChain 또는 Microsoft의 Azure Cognitive Search와 같은 LLM 확장을 사용하여 생성 AI 도구가 제공되는 작업과 관련된 기업 데이터를 검색할 수 있습니다.

Insight의 경우, genAI는 회사가 어떤 계약을 체결했는지 파악하고 우선순위를 정한 다음 CRM 데이터와 상호 참조하여 고객을 위한 향후 계약 작성을 자동화하는 데 사용됩니다.

표준 SQL 데이터베이스 또는 파일 라이브러리와 같은 일부 데이터 소스는 연결하기 쉽습니다. AWS 클라우드 또는 맞춤형 스토리지 환경과 같은 다른 솔루션은 안전하게 액세스하기가 더 어렵습니다.

“많은 사람들이 자신의 데이터를 모델에 추가하려면 모델을 재교육해야 한다고 생각하는데, 절대 그렇지 않습니다. 모델이 어디에 있고 어떻게 실행되는지에 따라 실제로 위험할 수 있습니다.”라고 Jackson은 말했습니다. "Azure 내에서 이러한 OpenAI 모델 중 하나를 쉽게 구축한 다음 해당 프라이빗 테넌트 내에서 데이터에 연결할 수 있습니다."

"역사는 사람들에게 올바른 도구를 제공하면 생산성이 향상되고 자신에게 이익이 되는 새로운 작업 방식을 발견하게 된다는 사실을 알려줍니다."라고 Jackson은 덧붙였습니다. "이 기술을 수용하면 직원들에게 업무 방식을 발전시키고 향상시킬 수 있는 전례 없는 기회를 제공하고 일부에게는 새로운 경력 경로를 발견할 수도 있습니다."

 

One Insight employee described the generative AI technology as being like Excel. “You don’t ask people how they’re going to use Excel before you give it to them; you just give it to them and they come up with all these creative ways to use it,” Jackson said.

Insight ended up talking to a lot of clients about genAI use cases considering the firm’s own experiences with the technology.

“One of the things that dawned on us with some of our pilots is AI’s really just a general productivity tool. It can handle so many use cases," Jackson said. "...What we decided [was] rather than going through a long, drawn-out process to overly customize it, we were just going to give it out to some departments with some general frameworks and boundaries around what they could and couldn’t do — and then see what they came up with.”

One of the first tasks Insight Enterprises used ChatGPT for was in its distribution center, where clients purchase technology and the company then images those devices and sends them out to clients; the process is filled with mundane tasks, such as updating product statuses and supply systems.

“So, one of the folks in one of our warehouses realized if you can ask generative AI to write a script to automate some of these system updates,” Jackson said. "This was a practical use case that emerged from Insight’s crowd-sourcing of its own private, enterprise instance of ChatGPT, called Insight GPT, across the organization."

The generative AI program wrote a short Python script for Insight’s warehouse operation that automated a significant number of tasks, and enabled system updates that could run against its SAP inventory system; it essentially automated a task that took people five minutes every time they had to make an update.

“So, there was a huge productivity improvement within our warehouse. When we rolled it out to the rest of the employees in that center, hundreds of hours a week were saved,” Jackson said.

Now, Insight is focusing on prioritizing critical use cases that may require more customization. That could include using prompt engineering to train the LLM differently or tying in more diverse or complicated back-end data sources.

Jackson described LLMs as a pretrained “black box,” with data they’re trained on typically a couple years old and excluding corporate data. Users can, however, instruct APIs to access corporate data like an advanced search engine. “So, that way you get access to more relevant and current content,” he said.

Insight is currently working with ChatGPT on a project to automate how contracts are written. Using a standard ChatGPT 4.0 model, the company connected it to its existing library of contracts, of which it has tens of thousands.

Organizations can use LLM extensions such as LangChain or Microsoft’s Azure Cognitive Search to discover corporate data relative to a task given the generative AI tool.

In Insight’s case, genAI will be used to discover which contracts the company won, prioritize those, and then cross-reference them against CRM data to automate writing future contracts for clients.

Some data sources, such as standard SQL databases or libraries of files, are easy to connect to; others, such as AWS cloud or custom storage environments, are more difficult to access securely.

“A lot of people think you need to retrain the model to get their own data into it, and that’s absolutely not the case; that can actually be risky, depending on where that model lives and how it’s executed,” Jackson said. “You can easily stand up one of these OpenAI models within Azure and then connect in your data within that private tenant.”

“History tells us if you give people the right tools, they become more productive and discover new ways to work to their benefit,” Jackson added. “Embracing this technology gives employees an unprecedented opportunity to evolve and elevate how they work and, for some, even discover new career paths.”

강아지건강 전문

www.dopza.com  

 

돕자몰 dopzaMall

 

www.dopza.com

 

반응형

댓글