CaixaBank is using big data and its partnership with Oracle to develop a new technology platform that improves the business and better anticipates customer needs through a 360-degree view of customers. CaixaBank consolidated 17 data marts into one centralized data pool built on Oracle technologies, improving customer relationships, employee efficiency, and regulatory reporting. The data pool collects data from various sources to power business use cases such as deposits pricing, customized ATM menus, online risk scoring, and online marketing automation.
Document Owner : petr.hosek@oracle.com (Senior Director, EMEA BI & Big Data Consulting Sales & Services Portfolio)
Singapore Seminar Speaker : prashant.x.shukla@oracle.com (APAC Consulting Solutions Director)
Support : jason.chia@oracle.com, dong-ho.lee@oracle.com
Hello. I am Chungsik Yoon (윤충식), responsible for the delivery of Big Data and BI solutions in Oracle's Consulting organization.
I hope that today's hands-on Big Data seminar gives you a look at the latest European case studies, industry by industry.
I hope it proves a good opportunity for every customer who is preparing, or considering, a Big Data project.
Today I would like to walk you through the Big Data project built at CaixaBank, one of the leading banks in Spain.
CaixaBank is a leading bank with 13.7 million customers; in this project it built a powerful, secure data repository on Oracle's Big Data infrastructure.
To carry out the project, CaixaBank first derived high-level solution knowledge and a strategic definition for Big Data, and then set out to realize that strategy through an end-to-end, complete infrastructure solution.
CaixaBank is one of the leading banks in the Spanish market, with a customer base of 13.7 million.
CaixaBank is implementing Oracle Big Data Infrastructure to create a powerful and secure data repository.
CaixaBank achieves four Big Data goals by teaming with OPN Diamond partner Accenture, valuing their high-level solution knowledge and strategy definition, and selecting Oracle Exalytics, Oracle Big Data Appliance, and Oracle Exadata.
Agenda for this session
Note from Petr: I do not think we need the Agenda slides, as we have only 3 sections!
That's why I have hidden them.
Agenda for this session
I will speak in this order: the major changes in the financial market; what CaixaBank implemented in its push to become a leader in the European market; and what, and how, Oracle can support you when you prepare a Big Data project.
Historically, traditional finance (banking, insurance, securities) and telecommunications systems were product-centric businesses. Accounts existed to activate products, and customer information existed merely as data attached to those accounts. The customer sat third in the hierarchy, behind products and accounts. Credit cards are a classic example of this product-out business: to sell a family or household card, a new account was registered and family or household information was recorded, so the inefficiency of product-out rather than customer-in persisted.
Banks, insurers, and card issuers need to change to customer-centric services: a structure in which the customer sits at the center and all products are used across that customer. The customer must now be at the top of every hierarchy. The starting point of this customer-in, "layered" approach is addressing the customer experience in order to understand it, and it unfolds as a process of "protecting" the customer from the complex changes occurring across the rest of the organization. Implementing it requires a business and technical architecture, delivered through digital experience and digital engagement, so that every customer's needs can be met at the right time.
Mortgage : Loan
Eligibility : qualification, regulation
Historically, in banking and insurance, communications were all product-centric businesses. An account would be activated for a product and a customer name would be attached to the account.
“Customer” was third in the hierarchy, behind products and accounts. Credit cards are a classic example of product-out business – families / households get a few dozen card offers from the same bank – very inefficient, and it leads to the customer asking “Do you know me as a customer?”
Banks and Insurers need to build a common set of customer-centric services that are used across all products, with the customer at the top of the hierarchy. The “layered” approach allows you to start addressing the customer experience while “protecting” customers from seeing the complexity of changes happening throughout the rest of the organisation.
For Customer In you need a business and technology architecture that delivers Digital Experience, Digital Engagement, and Componentized Core, taking prospects off the market and getting to a yes with existing customers in the shortest possible time.
--
At the center of customer-in sits data, and the leaders are those who convert that data into value.
Becoming a truly digital bank requires new capabilities. These can be foundational or transformational, and they can be defined in terms of cutting costs or increasing revenue. Big data can be the cornerstone for securing real revenue-uplift potential.
CaixaBank's "360-degree view", built on finely refined segmentation, is the foundational capability supporting the transformational use cases. Revenue uplift comes from the ability to deliver customer-specific pricing in real time; real-time pricing must control the context of the customer's real-time transactions in order to generate real-time offers. And customer-specific bundles for up-selling must be sellable at the right time, every time.
With a deep understanding of the customer, the bank can communicate with the customer proactively. This is possible only when the context of the customer's interactions can be used accurately, at exactly the right time, and it is how the limits of product-centric revenue growth are overcome. For example, if the bank communicates about a customer's debit-card usage through real-time messages, it can use where the transaction occurred (in a particular place, say Washington, or at home) and what the transaction was (a payment or a cancellation); the ability to use external data makes accurate information and accurate results available.
CaixaBank started this business with its Data Pool. This new data platform became the foundation that could satisfy the need for a 360-degree view of the customer.
Juan Maria Nin, CEO CaixaBank
“One of CaixaBank's strategic goals in becoming a leading European bank was to generate business value by analyzing customer data and exploiting big data.
To execute that strategy, we completed a system built to understand customers through a Customer 360 view and to meet their needs.”
[animated version of the slide – transitions to second topic of the pitch, Caixa]
In order for banks to become real digital banks, new capabilities are required. These can be either Foundational or Transformational, leading to lower costs or uplifting revenue. (Big) Data is the cornerstone of the capabilities that create substantial revenue-uplift potential.
A “360-degree” view of the customer, aided by fine-grained segmentation information, is the foundational capability in support of the transformational use cases. Revenue uplift will come from the ability to deliver customer-specific pricing in real time. It is here, in this stage, that you have control of the context of the customer transaction, to deliver a price or make an offer in real time, and to up-sell a customer-specific bundle that is presented at the right time, every time.
Selling by product silos can completely disappear: with deep insight about the customer and the context, you can be proactive in the engagement. For example, enable a switch to withdraw or pay in USD on the debit card, with clear communication on what it costs to use, when you can message or locate the customer away from home in Washington DC. This is how the telcos do it today. It is the ability to use data from outside the four walls of your enterprise, process it inline, and deliver the right results.
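The location-aware debit-card scenario above can be sketched as a simple decision rule. This is a minimal, hypothetical illustration (the `Transaction` fields and the `decide_message` rule are invented here), not CaixaBank's implementation:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Transaction:
    customer_id: str
    city: str           # where the card is being used right now
    home_city: str      # the customer's registered home city
    currency: str       # currency of this transaction
    home_currency: str  # the customer's account currency

def decide_message(tx: Transaction) -> Optional[str]:
    """Return a contextual message for the customer, or None.

    Only two signals are checked here; a real decision engine would
    also weigh consent, channel preference, and interaction history.
    """
    away_from_home = tx.city != tx.home_city
    foreign_currency = tx.currency != tx.home_currency
    if away_from_home and foreign_currency:
        return (f"Using your debit card in {tx.city}: payments in "
                f"{tx.currency} carry a foreign-exchange fee. "
                "Reply YES to enable fee-free withdrawals today.")
    return None

tx = Transaction("c-42", "Washington DC", "Barcelona", "USD", "EUR")
print(decide_message(tx))
```

The point is the "right context, right time" idea from the slide: the rule fires only when external context (location, currency) and customer master data line up.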
CaixaBank started its data pool initiative to do just that. As you can read in the quote from Mr Nin, CEO of CaixaBank, the bank will create, in partnership with Oracle, a new data platform to enable the bank to anticipate the needs of customers with a 360-degree view of these customers.
==
Uplift Revenue : Increase Revenue
Personal Financial Management (PFM) refers to software that helps users manage their money. PFM often lets users categorize transactions and add accounts from multiple institutions into a single view.
PFM also typically includes data visualizations such as spending trends, budgets, and net worth.
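The transaction categorization and spending-trends features described above can be sketched in a few lines. The keyword rules below are invented for illustration; real PFM products use merchant codes and learned classifiers:

```python
from collections import defaultdict

# Hypothetical keyword -> category rules.
RULES = {"grocery": "Groceries", "cafe": "Dining", "metro": "Transport"}

def categorize(description: str) -> str:
    """Map a transaction description to a spending category."""
    d = description.lower()
    for keyword, category in RULES.items():
        if keyword in d:
            return category
    return "Other"

def spending_by_category(transactions):
    """Aggregate amounts per category; this feeds a spending-trends chart."""
    totals = defaultdict(float)
    for description, amount in transactions:
        totals[categorize(description)] += amount
    return dict(totals)

txs = [("City Grocery Store", 54.20), ("Metro Card Top-up", 20.0),
       ("Corner Cafe", 7.5), ("Bookshop", 12.0)]
print(spending_by_category(txs))
# {'Groceries': 54.2, 'Transport': 20.0, 'Dining': 7.5, 'Other': 12.0}
```

Adding accounts from multiple institutions into a single view would mean running the same aggregation over transactions fetched from each institution.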
STP : Segmentation Targeting Positioning
Agenda for this session
CaixaBank is one of the leading banks in Spain, with a 16.8% branch market share and, in terms of actual transactions (absorption), a 74.25% share for ATMs and 81.17% for internet banking.
It serves 13.2 million customers out of Spain's population of 46.7 million.
CaixaBank, S.A. (Catalan pronunciation: [ˌkaʃəˈbaŋ]), formerly Criteria CaixaCorp, is a Spanish financial services company owned by the Catalan savings bank La Caixa with a 72.76% stake. Headquartered in Barcelona, the company consists of the universal banking and insurance activities of the La Caixa group, along with the group's stakes in the oil and gas firm Repsol YPF, the telecommunications company Telefónica and its holdings in several other financial institutions. Isidre Fainé is the Chairman of the company, having replaced Ricard Fornesa Ribó in May 2009, and since June 2014 its CEO is Gonzalo Gortázar. It is Spain's third-largest lender by market value and, with 5,695 branches to serve its 13.2 million customers, CaixaBank has the most extensive branch network in the Spanish market.
http://medianetwork.oracle.com/video/player/3843337229001
CaixaBank achieves four Big Data goals by teaming with OPN Diamond partner Accenture, valuing their high-level solution knowledge and strategy definition, and selecting Oracle Exalytics, Oracle Big Data Appliance, and Oracle Exadata.
Consolidate 17 data marts into ONE
Improve relationships with customers by offering better products.
Improve employee efficiency : Monitoring System
Centralize regulatory information : central control
Business Needs
Provide agile, timely response to growing regulatory pressure (e.g. European Stress Tests)
Enable full 360º view of the customer
Democratization of information: from a siloed organization and siloed information to a data-model pool that promotes creativity and productivity
IT Needs
Getting a holistic, unified vision of the internal and external data used by CaixaBank business processes along its lifecycle: ingestion, production, storage, transformation, and consumption.
Increase agility, transparency, and security in the use of data, improving the capacity to adapt to changes and business requirements
Incorporating Advanced Analytics and Data Discovery tools to identify correlations, new data to ingest, and new attributes and patterns that add business value
Improve the quality of service of informational systems, assuring high availability, contingency, and data protection
Major Focus
1) Big Data & Real Time Bidding
2) Extreme Personalization
3) Social Network Analysis
4) Analysis Factory
CaixaBank reached its goals through a phased approach.
First, all data, including the previously scattered data marts, was consolidated into one through the Data Pool and the Data Factory Engine.
The applications and use cases were then built on top of the Data Pool.
CaixaBank's Data Pool concept grew out of a strategic initiative to maximize business value from the bank's information assets, and pursued the following four points:
A single, complete, unified view for finding and exploiting business-meaningful value in all internal and external data across the data lifecycle
A setup that supports time-to-market (greater security, transparency, and agility)
Stronger in-depth analytics and prediction
Higher quality of service
CaixaBank “Data Pool” Strategic Initiative Maximizing Business Value from Informational Assets
Complete and Unified View of internal and external data meaningful for Business across all the data lifecycle stages (online/production, staging, enterprise, consumption…)
Increase agility, transparency and security in using data, much more flexible to address emerging business needs and meet new requirements coming from the lines of business (time-to-market)
Capable of Data discovery and Advanced Analytics, able to find patterns and correlations, new uses and transformations, ingest new data regardless of the format, and flexible to introduce new attributes for creating new value add
Increase Quality of Service, providing Data Protection, High Availability, Recovery and Contingency to every kind of data without affecting operations
CaixaBank has created a strategic initiative by the name of “Data Pool”, which can be summarized as “the extraction of the maximum business value from any kind of data, regardless of its type, its origin and its consumption model”. The Big Data project “is aimed at ingesting and making available across The Bank any piece of information demanded by the business: Smart Banking, Sentiment Analysis, Customer Behaviour patterns, Artificial Intelligence, and more.”
The Data Pool Initiative is not driven by a single or a set of concrete business cases to be addressed in a short or medium term. It is driven instead as the strategic approach to the Bank’s new Information Management Architecture for the coming years. Based on that the various business initiatives will be implemented. Some examples are:
Deposits Pricing: creating a framework for pricing liabilities and control heading pricing which promotes the customer relationship
ATMs Customized Menus: customizable buttons and operations, e.g. voice guidance for blind people
Online Risk Scoring: "immediate" granting of a credit card to non-customers based on their card and/or account number at another institution
Online Marketing Automation: offers at the right time and right location, via the preferred channel
Sentiment Analysis
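The "immediate" online risk-scoring use case above can be illustrated with a toy additive scorecard. Every attribute name, weight, and threshold below is invented for illustration; a production model would be statistically fitted and subject to regulatory approval:

```python
def risk_score(applicant: dict) -> int:
    """Toy additive scorecard: higher score = lower risk.

    Attributes are the kind of external signals the slide mentions
    (card/account history at another entity); values are made up.
    """
    score = 0
    score += 30 if applicant.get("has_external_account") else 0
    score += 25 if applicant.get("card_history_months", 0) >= 12 else 0
    score += 20 if applicant.get("verified_income") else 0
    score -= 40 if applicant.get("prior_defaults", 0) > 0 else 0
    return score

def instant_decision(applicant: dict, threshold: int = 50) -> str:
    """Grant immediately above the threshold; otherwise route to a human."""
    return "approve" if risk_score(applicant) >= threshold else "refer to manual review"

applicant = {"has_external_account": True, "card_history_months": 24,
             "verified_income": True, "prior_defaults": 0}
print(instant_decision(applicant))  # approve (score 75 >= 50)
```

The "online" part of the use case is that this decision runs synchronously, while the applicant waits, against data ingested from another entity.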
This slide shows the system configuration that consolidates CaixaBank's Data Pool into one.
Big Data Appliance
Exadata
Exalytics
This slide shows the system configuration that consolidates CaixaBank's Data Pool into one.
It is a Data Pool built on Oracle Engineered Systems, configured redundantly for high availability.
Big Data Appliance : an engineered system optimized for acquiring and organizing unstructured data and loading it into Oracle Database
Exadata : a database-dedicated engineered system delivering top performance for both DW and OLTP applications
Exalytics : an engineered system combining in-memory software and hardware with a BI platform optimized for visualization
It is a case in which Oracle's end-to-end solutions delivered fast implementation and minimized risk.
Implementation period: 18 months
Total data volume: 1.7 PB
The “Data Pool” approach could be reused with other customers/verticals
Strategic approach providing vision and architecture is a differentiator
Executive sponsorship is important
Oracle-on-Oracle strategy is key enabler
Intensive use of extended team (OCS, Enterprise Architects, ISG, ...) is fundamental
Oracle Services Leadership is key:
Support the initial project(s)
Help fill gaps on the customer side
Many emerging technologies require specialized skills
The conceptual view of the Oracle Information Management Reference Architecture is divided into an Execution layer and an Innovation layer.
The Execution layer covers the flow of data from its sources through to its use; the Innovation layer covers Data Mining and Big Data Discovery.
The Execution layer is defined by the Events engine, which generates and processes input events, and the Data Factory.
The right side of the Execution area is the Information Platform; Data Applications sit to the left of the Data Factory, and Information Solutions to its right.
The left side of the Execution area is the Real-Time Events area.
The Data Factory Engine handles code generation, ETL procedures for new data sources, scheduling and job dependencies, protection and audit, and monitoring and reporting.
--
Apache Flume
Metadata
Ingestion : for new sources, a module that supports automatically reflecting the information in the data repository
Execution and Scheduling : scheduling the loads by dependencies and available resources
Re-Use : Guidelines & best practices - rules and guidelines for developing projects; Modelling support - tools for supporting the modeling of structured information and its associated metadata; Application Governance - metadata management & project-configuration data maintenance
Data Validation and Quality : Code Generation -Speeding the development of projects by providing code generators & knowledge modules
Audit and Design : Audit - Recording the operations executed by users, Access Control - Managing the access to the information stored. Object, row & column filters based on metadata
Data Management Promotion : Data Management - Data lineage & impact analysis of changes
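The "Execution and Scheduling … by dependencies" module above amounts to building a dependency-ordered load plan. A minimal sketch (the job names and dependency table are hypothetical, standing in for what DFE metadata would record) using Python's standard-library topological sort:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical ingestion jobs mapped to their upstream dependencies,
# as the scheduling metadata might record them.
dependencies = {
    "load_accounts_raw": set(),
    "load_cards_raw": set(),
    "conform_customers": {"load_accounts_raw", "load_cards_raw"},
    "build_customer_360": {"conform_customers"},
}

# static_order() yields jobs so every job runs after its dependencies;
# a scheduler would also weigh available resources at each step.
plan = list(TopologicalSorter(dependencies).static_order())
print(plan)
```

The two raw loads have no mutual dependency, so a resource-aware scheduler could run them in parallel before the conformance and 360-view steps.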
1. User requests a copy of content to the Staging Layer:
A user requests access to content in the Staging Layer, for a list of people or for a role within the organization, through the application, supplying:
Name of the original content
Name to give to the content into the staging
Persistency policy
Usage type
Expiration time
2. The Ingestion Specialist checks the request, completes the data, and grants permissions (also based on resources); if needed, a charge-back function is applied:
a) DFE proposes a name for the columns to include
b) For names greater than 30 characters, it raises a warning and asks for a new name
c) The Ingestion Specialist and the user enter the names of the columns
d) Users/groups are assigned to the files defined as consumers of the information
3. DFE Acquires the Data Format
Based on the entered metadata
a) DFE captures the format/data definition from metadata defined by the Ingestion Specialist
b) DFE captures the format/data definition from the sources using native ODI functionality and connectors
c) DFE loads the source metadata into ODI metadata
d) DFE generates ODI metadata describing the process and the execution steps to run
4. Code Execution by DFE/ODI
a) DFE, through ODI, creates a copy inside the Data Reservoir and then the Discovery Lab
b) DFE, through ODI, registers metadata with the assigned column names
c) DFE assigns privileges and publishes information at all required levels
d) DFE allows copy management:
Refresh the content
Manage Life-Cycle of Raw Data (age-out)
Warn about expiration
5. User Data Discovery & Rework
a) The user analyzes and discovers data-structure changes using Big Data Discovery features
b) The user and the Ingestion Specialist update the data format
c) DFE regenerates all related metadata without copying the data again
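The five steps above can be sketched end-to-end. The request fields and the 30-character rule come from steps 1 and 2b of the notes; the class and function names are a hypothetical skeleton, not the actual DFE code:

```python
from dataclasses import dataclass, field

MAX_NAME_LEN = 30  # step 2b: names over 30 characters trigger a warning

@dataclass
class StagingRequest:
    """Step 1: what the user supplies when requesting a staging copy."""
    original_content: str
    staging_name: str
    persistency_policy: str
    usage_type: str
    expiration_days: int
    columns: list = field(default_factory=list)

def propose_columns(raw_columns):
    """Steps 2a/2b: propose column names, warning on names > 30 chars."""
    proposed, warnings = [], []
    for name in raw_columns:
        if len(name) > MAX_NAME_LEN:
            warnings.append(f"'{name}' exceeds {MAX_NAME_LEN} chars; rename required")
            name = name[:MAX_NAME_LEN]  # placeholder until a new name is entered
        proposed.append(name)
    return proposed, warnings

def register_copy(request, raw_columns):
    """Steps 3-4 (simplified): capture the format and register metadata."""
    request.columns, warnings = propose_columns(raw_columns)
    metadata = {
        "staging_name": request.staging_name,
        "columns": request.columns,
        "expires_in_days": request.expiration_days,
    }
    return metadata, warnings

req = StagingRequest("customer_master", "stg_customers", "volatile", "discovery", 30)
meta, warns = register_copy(req, ["customer_id", "a_really_long_descriptive_column_name"])
print(meta["columns"], warns)
```

Step 5 (discovery and rework) would re-run `register_copy` with an updated format, regenerating the metadata without copying the data again.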
The benefits of CaixaBank's Data Pool were announced as follows.
Data Mart consolidation
30% reduction in ETL jobs
Evolution to near-real-time for informational systems; "reduce time-to-market, increase time-to-value and alignment to business requirements" (estimated 70% improvement)
20% OPEX reduction
New Data Marts and Consume Structures
Eliminate duplicate data and simplify control (simplifying and controlling data access, allowing relations without data duplication)
Stronger immediate support for business requirements (increase agility through a unified, consistent vision of business concepts from the data; better time-to-market and response to business requirements)
Advanced Analytics
Immediate availability of advanced analytics (advanced analytics against very large volumes of data, enabling fast decisions and better detection of data patterns and relations than existing conventional methodologies)
Reduced TCO by embracing in-memory capabilities
Any type of Data
Cost reduction through an incremental, progressive deployment of the Data Pool, taking in new data sources, including unstructured data, while reducing operational complexity
Generate valuable information as data grows, increasing knowledge and business value (data augmentation: enrich information to increase knowledge and business value)
A sandbox was established in which changing patterns of markets, customers, and products can be analyzed at any time from diverse patterns and source data, laying the groundwork for continuous, immediate analysis,
and, building on the strengthened analytics: tighter regulatory compliance, credit-risk response, churn prevention, branch-level HR management, trading analysis, and data aging (the process of removing old data from secondary storage to allow the associated media to be reused for future backups)
Real-Time Processing : fraud (identity theft) detection, real-time product recommendations, real-time location-based marketing
Data Governance
Agenda for this session
Now, for customers preparing for Big Data, let me introduce the Oracle Consulting services you can engage together with Oracle.
They include discovering business use cases for applying Big Data, shaping the architecture, and running a pilot project; there is no need to take them in sequence.
They are optional services, depending on your situation and readiness.
Big data is both an opportunity and a threat: depending on how closely you respond to business change, it can be an opportunity, or you can fall behind.
Through today's CaixaBank case we have met an early example of putting big data to business use; you should upgrade data integration and management onto a Big Data foundation and move new analytics and applications into that larger Big Data frame. Adapting to this change takes considerable preparation on both the business and IT sides, because the whole organization must be involved, not a single line of business.
Oracle's Big Data Innovation Workshop series can help you with business strategy, Big Data architecture, and implementation direction.
It is difficult for any customer to prepare perfectly in advance for such a sweeping change. Your next step may simply be re-establishing the foundation, upgrading data-integration capabilities, resolving a critical part of data management, modernizing the current EDW, or adding a Hadoop-based data reservoir.
The goal, though, is to prepare for seamless integration so that new data can easily be used by the rest of the organization. An architecture workshop can help you identify what to start and how.
If you pick something small from the services Oracle has prepared, you can remove the murky risks of the future in a short time and start small.
You can adopt Hadoop through an incremental, iterative process; if you have already built a Hadoop cluster you can work with it, or the Big Data Appliance can help you try things out quickly on site, and a Big Data Discovery workshop can help you find what you could do.
Here are next steps for three different big data appetites. These aren’t sequential choices. They are options, based on where you are.
Some organizations are seeing the threat or the opportunity around big data and feel that the correct response is a comprehensive transformation of the business.
This is the kind of approach that CaixaBank is taking, as you heard earlier. Doing this requires that you touch all the different aspects of that big data wheel, from upgrading your data integration and management to creating new analytics and applications. The potential payoff is huge, but it does require significant work, on the technical side but more importantly on the business side, because getting everybody in alignment to make all of this happen across a large organization is complex. We can help with a big data innovation workshop series, advising on and guiding the formation of your business strategy, architecture, and implementation.
Not everybody is ready for that scale of transformation, and that’s perfectly OK. For some companies we work with or talk to, the next step is to build something of a foundation. That means working to upgrade their data integration capabilities. And it’s critical to look at data management, perhaps expanding or modernizing a data warehouse, and adding a Hadoop-based data reservoir. With the goal being to get those two environments seamlessly integrated together so that new data is easily available to the rest of the organization. Again, we can help start that process with an architecture workshop to help identify what you can do that will deliver the most value to your company.
One of the best pieces of advice on getting started with big data is to pick something that’s smaller, delivers some worthwhile value, but does it in a short time frame. It gives you an opportunity to take a lower cost, lower risk first step that can lead to bigger things in future iterations. And here we would recommend looking at a discovery project on Hadoop.
If you have an existing Hadoop cluster you can work with that, or remember that using the Big Data Appliance will get that cluster up and running quicker and cheaper than if you build it yourself. And then use Big Data Discovery to take a look at that new data and see what it can do for you.
{Technology Services}
Oracle Consulting Technology Services for Oracle Big Data Solutions are principally aimed at customers and partners who want a product-oriented accelerator to quickly ramp up Big Data technology skills and gain an initial understanding of concrete use cases for a specific Oracle product.
Rapid Start for Oracle NoSQL. It is a pre-packaged service based on Oracle NoSQL technology. Fast ingestion, high scalability and availability, high-performance concurrency, and low-latency response are key features of this technology. The Oracle Consulting Rapid Start includes tangible use cases (based on real delivery projects) accompanied by leading practices from Oracle NoSQL implementations. Duration: from a few days to 3 weeks.
Rapid Start for Oracle Real Time Decision. Fast-data solutions and machine-learning models are at the core of most Big Data solutions. Oracle Real Time Decision offers self-adaptive learning that prescribes optimized recommendations. With the Oracle Consulting Rapid Start for Oracle Real Time Decision, the customer can quickly fill the knowledge gap on the product and immediately increase its ROI. Duration: from a few days to 3 weeks.
Rapid Start for Oracle Big Data SQL.
A new entry in the Oracle Big Data enterprise solution, which leverages SQL queries to seamlessly and efficiently access data stored in Hadoop, relational databases, and NoSQL stores. Rapid Start for Oracle Big Data SQL unveils the potential of this new technology for your business-intelligence strategy.
Duration: from few days to 2 weeks.
Rapid Start for Oracle Big Data Connectors. This Rapid Start guides you through a step-by-step use case on how to integrate a Hadoop HDFS cluster with an Oracle Database. Oracle experts provide tips and tricks on how to go from a large, unstructured dataset to a structured dataset ready for consumption. Design, configuration, and running of Oracle Big Data Connectors (OBDC), ODI, and other integration tools between Hadoop and Oracle Database are the activities listed in this service's catalogue (the one delivered depends on the specific customer use case).
Duration: from few days to 2 weeks.
Rapid Start for Oracle Advanced Analytics. Empower your data analysts and data scientists with Oracle Data Mining and statistical algorithms (for example, linear and logistic regression, neural networks, time-series analysis). By leveraging prior use cases in your industry, this service provides a step-by-step implementation of a statistical model with Oracle Data Mining (ODM) and Oracle R Enterprise (ORE).
Duration: from few days to 2 weeks.
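As a flavor of the simplest model family named above, here is a plain-Python ordinary-least-squares fit for one-variable linear regression. This is only an illustration of the technique on toy data; ODM and ORE fit such models in-database at scale:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x.

    Closed-form solution: b = cov(x, y) / var(x), a = mean_y - b * mean_x.
    """
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    a = mean_y - b * mean_x
    return a, b

# Toy data lying almost exactly on y = 2x.
a, b = fit_line([1, 2, 3, 4], [2.1, 3.9, 6.0, 8.1])
print(round(a, 2), round(b, 2))
```

A logistic regression or time-series model follows the same pattern (fit parameters, then score new rows), just with a different objective function.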
Rapid Start for Oracle Endeca Information Discovery. This service proves the value of having a complete Big Data solution in just one product, Oracle Endeca Information Discovery (EID). From the “Acquire” phase of unstructured data sourced from different means (e.g. social media, sensor data) to graphical visualization on dynamic dashboards (the “Decide” phase), Oracle Consulting delivers a use case that guides and supports you through the Big Data challenge.
Duration: from few days to 3 weeks.
------------------------------------------------------------------------------------------------------------------------------------
{Architectural Services} Oracle Consulting Architectural Services are generally designed to help customers in the early stage of their Big Data project, or at any other stage at which they want to dive deep into business requirements and understand how to translate them into a Big Data design.
Innovation Workshops. A business-led, innovative approach to optimize your Big Data transformation journey, from qualification to go-live. Key Big Data concepts are instilled into business and technical users and then collected and harmonized in the “Divergent Thinking” phase. Ideas with recognized business value are promoted into requirements and subsequently into design decisions for the Big Data solution; this is the “Convergent Thinking” phase. Finally, “Implementation Iterations” allow you to iteratively reach the optimal solution. Duration: from 5 to 10 days (not consecutive), including back-office work and final close with the customer.
Information Management and Big Data MasterClass Workshops. The MasterClass provides an adaptable platform aimed at highlighting Oracle's thought leadership on Information Management and Big Data, exploring aspects of the customer's current-state architecture and capabilities, and developing a shared understanding among delegates in order to make progress. The workshop can vary in length and focus depending on the situation. It is typically run as a whiteboard session (no PPT) and is product-agnostic, so it can be offered to customers who are not yet Oracle-oriented. Duration: from 1 to 3 days, including back-office work and final close with the customer.
Analytical Capability Workshops. The workshop covers three main elements, with the emphasis on each varying with the customer's situation and current skills: (a) the data, process flow, and analytical techniques required to drive business value from specific use cases, e.g. how you might increase product up-sell through customer segmentation; (b) how analytical capabilities might be enabled through a Discovery Lab and what this entails; (c) other people, process, and technology elements that must be considered to realise analytical capabilities and business value (e.g. current IT-architecture issues, the current roadmap of the customer's IT architecture). Duration: from 1 to 3 days, including back-office work and final close with the customer.
Roadmap & Blueprint (Workshops). The Blueprint and Roadmap service delivers a series of detailed workshops to review the customer's use cases and Big Data requirements and map them to industry use cases. This pack analyses and supports the discussion with the customer around different scenarios of future-state architectures at different levels, from conceptual down to technical and infrastructure. One key aspect is the definition of Data Governance and of the end-to-end Big Data process flow (i.e. Acquire, Organize, Analyse, and Decide). Finally, Oracle Consulting delivers the recommended Architecture Blueprint and Roadmap document to the customer, to assist its Big Data transformation journey. Duration: from 2-3 days to 5 weeks, depending on the level of detail for which the customer requires support in the Blueprint definition.
------------------------------------------------------------------------------------------------------------------------------------
{Solution Services} Oracle Consulting Solution Services are based on the expertise and leading practices accumulated by Oracle Consulting across several Big Data customer success stories. They provide solutions and advisory services on specific design patterns of a Big Data modernisation project.
Applications Store (Rapid Start Pack) for Oracle Big Data Appliance. If the customer is looking at the Oracle Big Data Appliance as a platform for different pre-packaged solutions from different partners, this advisory service explains Oracle guidelines and leading practices to maximise the Big Data Appliance's value. It looks at the optimal deployment of different third-party Big Data solutions, advising on compliance and adherence to the Oracle Big Data Appliance leading practices.
Duration: from few days to 2 weeks.
Data Reservoir Rapid Start Pack. Have customers ever wondered how to deal with the massive proliferation of new sources of digital information and the volume and velocity at which they are generated? Do they know a cost-effective way to minimize the risk and maximize the value it provides? The Oracle Consulting Pack for Data Reservoir walks customers through the design, build, and run of a solution that harmonizes different storages for different data types (e.g. Hadoop HDFS, Oracle NoSQL, Oracle Database, and other databases), facilitates the interaction of data-provisioning and transformation tools (e.g. ETL), and sets a structured Data Governance approach for daily execution. The Data Reservoir empowers the business with an innovative platform that fosters new insights and new value from its data.
Duration: 4 weeks.
Data Factory Engine Rapid Start Pack. The Rapid Start Pack for Data Factory Engine comes from long Oracle Consulting experience with integration platforms for Big Data solutions. By using a flexible metadata definition, the Data Factory Engine deals with any type of data, from any source, at any volume and any frequency. Data orchestration between the different components of your solution (e.g. Data Discovery, Data Reservoir, Data Staging, and Data Warehouse) is therefore simplified and controlled, to maximize ROI on your asset. Duration: 3 weeks.
Data Warehouse Offload Rapid Start Pack. The Rapid Start Pack for Data Warehouse Offload uses an innovative approach to optimize your data warehouse, from profiling to production. Profiling workshops first help to understand your key pain points before carrying out the offloading process as a series of repeatable packages that optimize each workload. At the end of the implementation, the customer will see a substantial gain in execution performance and maintenance efficiency, together with a cost-effective platform that is future-proof for any extension of the company's information-management strategy.
Duration: 4 weeks.
Discovery Lab Rapid Start Pack. Many key stakeholders have not yet understood the business value of an enterprise Big Data solution. The Rapid Start Pack for Discovery Lab quickly empowers the customer's organization (e.g. analysts, data scientists, planners) with a comprehensive, agile Big Data solution that deals with structured, poly-structured, and unstructured data. Oracle Consulting advises not just on the proper enabling technologies (chosen from a portfolio of Oracle and non-Oracle Big Data products) but also on the discovery approach: prototyping, visualization, bridging, replication, and transformation.
Duration: from 2 to 4 weeks.
Fast Data Rapid Start Pack. Allowing on-the-fly fast analytics is a key element of any Big Data solution; it opens new opportunities for monetizing streaming information, more proactive monitoring of customer behavior, and real-time analysis of any core business process in the company. The Rapid Start Pack for Fast Data advises on the solution that best fits the customer's needs, spanning Oracle and non-Oracle technologies and leveraging some of the most relevant industry use cases. Duration: from 2 to 4 weeks.