This document is a tutorial on managing pluggable databases in Oracle 12c. It discusses how to rename, manage, and drop pluggable databases. It also covers security topics like common vs local users and roles, and how privileges are handled between the CDB root and pluggable databases. The tutorial demonstrates renaming a pluggable database called "TEST" to "new", managing tablespaces and datafiles between the root and pluggable databases, and creating both common and local users and roles.
This document provides a tutorial on managing pluggable databases in Oracle 12c. It discusses how to check the container name and ID, view pluggable databases, create a new pluggable database called TEST, open and close pluggable databases, and manage tablespaces within pluggable databases. The key steps covered are using SQL commands like SELECT, ALTER, and CREATE to work with the container database, pluggable databases, tablespaces, and data files.
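The basic container checks and PDB lifecycle commands described above can be sketched in SQL*Plus as follows; the CDB name, admin password, and file paths are illustrative, not taken from the tutorial itself.

```sql
-- Identify the current container (the root shows CDB$ROOT)
SELECT SYS_CONTEXT('USERENV', 'CON_NAME') AS con_name,
       SYS_CONTEXT('USERENV', 'CON_ID')   AS con_id
FROM   dual;

-- List the pluggable databases known to this CDB
SELECT con_id, name, open_mode FROM v$pdbs;

-- Create a new PDB named TEST from the seed (paths are illustrative)
CREATE PLUGGABLE DATABASE test
  ADMIN USER test_admin IDENTIFIED BY password
  FILE_NAME_CONVERT = ('/u01/oradata/CDB1/pdbseed/',
                       '/u01/oradata/CDB1/test/');

-- Open and close it
ALTER PLUGGABLE DATABASE test OPEN;
ALTER PLUGGABLE DATABASE test CLOSE IMMEDIATE;
```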
This document discusses how to unplug and plug pluggable databases in Oracle 12c. It shows how to:
1. Unplug two pluggable databases named "new" and "new2" into XML files.
2. Drop the databases while keeping the datafiles.
3. Check for compatibility before plugging the databases into another container.
4. Plug the databases back into the container using the XML files with the "nocopy" and "copy" clauses.
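The four steps above can be sketched in SQL; the manifest paths and datafile locations are illustrative.

```sql
-- 1. Unplug a PDB into an XML manifest (it must be closed first)
ALTER PLUGGABLE DATABASE new CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE new UNPLUG INTO '/u01/manifests/new.xml';

-- 2. Drop the PDB but keep its datafiles on disk
DROP PLUGGABLE DATABASE new KEEP DATAFILES;

-- 3. In the target CDB: check compatibility before plugging in
SET SERVEROUTPUT ON
DECLARE
  compatible BOOLEAN;
BEGIN
  compatible := DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
                  pdb_descr_file => '/u01/manifests/new.xml');
  DBMS_OUTPUT.PUT_LINE(CASE WHEN compatible THEN 'YES' ELSE 'NO' END);
END;
/

-- 4a. Plug in reusing the existing files in place (NOCOPY) ...
CREATE PLUGGABLE DATABASE new USING '/u01/manifests/new.xml' NOCOPY;

-- 4b. ... or copying the files to a new location (COPY)
CREATE PLUGGABLE DATABASE new2 USING '/u01/manifests/new2.xml'
  COPY FILE_NAME_CONVERT = ('/u01/oradata/old/', '/u01/oradata/new2/');
```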
The document discusses two options for achieving high availability for Oracle Database Standard Edition 2 (SE2):
1) Standard Edition High Availability (SEHA) provides an out-of-the-box failover cluster configuration using Oracle Grid Infrastructure that supports automatic failover between two nodes.
2) Using refreshable pluggable databases (PDBs) allows cloning a PDB from a primary database to a secondary database for read-only reporting or to refresh the secondary PDB periodically to propagate changes.
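The refreshable-PDB approach can be sketched as follows; the PDB names, database link, and refresh interval are illustrative assumptions.

```sql
-- On the secondary CDB: clone a PDB from the primary over a database
-- link, keeping it refreshable on a fixed interval
CREATE PLUGGABLE DATABASE reporting_pdb
  FROM sales_pdb@primary_link
  REFRESH MODE EVERY 10 MINUTES;

-- Open read-only for reporting (automatic refresh pauses while open)
ALTER PLUGGABLE DATABASE reporting_pdb OPEN READ ONLY;

-- A manual refresh is also possible while the PDB is closed
ALTER PLUGGABLE DATABASE reporting_pdb CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE reporting_pdb REFRESH;
```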
The document provides an agenda and overview for a hands-on workshop on Oracle 12c pluggable databases. The agenda includes topics on Oracle history, container databases, pluggable databases, new users and privileges in Oracle 12c, and several hands-on labs for activities like dropping/unplugging pluggable databases, plugging/cloning pluggable databases from remote container databases using database links, and moving a non-container database to a container database using Data Pump transportable export/import. Slides accompany the topics and provide additional technical details on concepts like container databases, pluggable databases, and the new user and role architecture in Oracle 12c.
Starting with 12c Release 1, Oracle introduced a completely new architecture concept for its database - the Container Database.
This new architecture brought new challenges, but in the same breath a wide range of new opportunities.
The presentation will address how to quickly and easily create new (test) databases or clones of a running production database. Five different approaches will be discussed:
- Using Local and Remote Cloning
- Using an Unplugged PDB (predefined master)
- Using Refreshable PDBs as a master for new (test) databases
- Snapshot Carousel
Another point of the agenda is the usage of the Snapshot features of ACFS and Direct NFS to speed up the creation process.
New Features for Multitenant in Oracle Database 21c - Markus Flechtner
Oracle Database 21c introduces several new features for multitenant databases:
- PDBs can now be upgraded automatically: when a PDB from an earlier release is plugged into a 21c CDB and opened, the upgrade is replayed on open.
- Resource management is improved with options like mandatory user profiles, per-PDB database resident connection pooling, and Oracle DB Nest for isolating PDBs using Linux namespaces and cgroups.
- Multitenant enhancements for high availability include PDBs being managed as cluster resources and improved PDB-level recovery when using Active Data Guard.
This document provides an overview of Oracle 12c Pluggable Databases (PDBs). Key points include:
- PDBs allow multiple databases to be consolidated within a single container database (CDB), providing benefits like faster provisioning and upgrades by doing them once per CDB.
- Each PDB acts as an independent database with its own data dictionary but shares resources like redo logs at the CDB level. PDBs can be unplugged from one CDB and plugged into another.
- Hands-on labs demonstrate how to create, open, clone, and migrate PDBs between CDBs. The document also compares characteristics of CDBs and PDBs and shows how a non-CDB can be moved into a CDB.
12cR2 Single-Tenant: Multitenant Features for All Editions - Franck Pachot
Multitenant architecture is available even without Oracle's multitenant option. In this session take a look at the overhead and the 12.2 new features so that you can choose among single-tenant or non-container databases. These features include agility in data movement, easy flashback, and fast upgrade.
This document discusses the creation of a multitenant container database (CDB) and pluggable databases (PDBs) in Oracle Database 12c. It covers creating a CDB using Oracle Universal Installer, Database Configuration Assistant, or manually. The manual process involves setting enable_pluggable_database to true, adding clauses to the CREATE DATABASE command, and running a script that creates the root and seed PDBs. The document also provides commands to validate if a database is a CDB and view its containers.
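The manual creation process described above can be sketched as follows; the CDB name, passwords, and seed paths are illustrative, and the elided clauses (logfiles, character set, and so on) would be filled in for a real system.

```sql
-- In the init.ora / spfile of the new instance, set:
--   enable_pluggable_database = true

CREATE DATABASE cdb1
  USER SYS    IDENTIFIED BY password
  USER SYSTEM IDENTIFIED BY password
  ENABLE PLUGGABLE DATABASE
  SEED FILE_NAME_CONVERT = ('/u01/oradata/CDB1/',
                            '/u01/oradata/CDB1/pdbseed/');

-- Validate the result
SELECT name, cdb FROM v$database;       -- CDB column shows YES
SELECT con_id, name FROM v$containers;  -- CDB$ROOT, PDB$SEED, ...
```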
Oracle Database 12.1.0.2 introduced several new features including approximate count distinct, full database caching, pluggable database (PDB) improvements like cloning and state management, JSON support, data redaction, SQL query row limits and offsets, invisible columns, SQL text expansion, calling PL/SQL from SQL, session level sequences, and extended data types support.
The document discusses new features in Oracle Database 12c Release 2 related to Oracle Multitenant architecture. Key points include:
- PDBs can now have local undo tablespaces for improved flashback and other features.
- PDBs can be plugged/unplugged into archive files, cloned with hot cloning, refreshed periodically, and relocated between CDBs.
- New resource management features allow limiting I/O rates, configuring memory usage, and assigning performance profiles for PDBs.
- PDB lockdown profiles provide a way to restrict features and operations on a per-PDB basis.
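The lockdown-profile and resource-limit features above can be sketched as follows; the profile name, PDB name, and limit values are illustrative, and some of the resource parameters became generally available only in later releases.

```sql
-- In the root: create a lockdown profile and disable operations
CREATE LOCKDOWN PROFILE restricted_ops;
ALTER LOCKDOWN PROFILE restricted_ops DISABLE STATEMENT = ('ALTER SYSTEM');
ALTER LOCKDOWN PROFILE restricted_ops DISABLE FEATURE   = ('NETWORK_ACCESS');

-- Assign the profile and per-PDB resource limits inside the PDB
ALTER SESSION SET CONTAINER = pdb1;
ALTER SYSTEM SET pdb_lockdown = restricted_ops;
ALTER SYSTEM SET max_iops = 1000;   -- I/O rate limit for this PDB
ALTER SYSTEM SET sga_target = 2G;   -- per-PDB memory configuration
```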
The document discusses setting up an Oracle 12c Active Data Guard physical standby database using RMAN DUPLICATE FROM ACTIVE. It involves 3 steps:
1) Configuring the primary and standby databases, including creating required directories, adding static entries to listener.ora, and editing tnsnames.ora.
2) Running RMAN DUPLICATE FROM ACTIVE on the primary to create the standby database instance while it is in NOMOUNT mode.
3) After duplicate completes, configuring redo transport on both primary and standby, adding standby redo logs, and opening the standby database to start managed recovery.
IOUG Collaborate 2015 - PDB Cloning Using SQL Commands - Leighton Nelson
This document provides an overview of cloning pluggable databases (PDBs) in Oracle using SQL commands. It discusses Oracle multi-tenant architecture and the different options for cloning PDBs, including from another PDB, a non-CDB, or a remote source. The bulk of the document demonstrates how to clone a local PDB from a source PDB using SQL commands, with examples of different cloning options and parameters. It also shows how to clone a PDB using a snapshot copy for near-instantaneous space-efficient copies.
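The main cloning options described above can be sketched as follows; the PDB names, paths, and database link are illustrative.

```sql
-- Local clone from an existing PDB (the source must be opened
-- read-only before cloning in 12.1)
ALTER PLUGGABLE DATABASE source_pdb OPEN READ ONLY FORCE;
CREATE PLUGGABLE DATABASE dev_pdb FROM source_pdb
  FILE_NAME_CONVERT = ('/u01/oradata/source_pdb/',
                       '/u01/oradata/dev_pdb/');

-- Remote clone over a database link to another CDB
CREATE PLUGGABLE DATABASE remote_clone FROM source_pdb@remote_cdb_link;

-- Near-instantaneous, space-efficient clone using a snapshot copy
CREATE PLUGGABLE DATABASE snap_pdb FROM source_pdb SNAPSHOT COPY;
```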
Schema replication using Oracle GoldenGate 12c - Uzzal Basak
This document provides instructions for configuring asynchronous schema replication between an Oracle source database and target database using Oracle GoldenGate 12c. It outlines the necessary steps which include:
1. Enabling supplemental logging and archivelog mode on both databases.
2. Installing the GoldenGate software and starting the Manager processes on both systems.
3. Configuring the Extract, Data Pump, and Replicate processes to replicate the BASAK schema and tables from the source PDBORCL to the target PRIPDB database.
4. Starting the Extract, Data Pump, and Replicate jobs to begin the replication process and ensure the BASAK schema and tables are synchronized between the source and target databases.
This document discusses Oracle Multitenant 19c and pluggable databases. It begins with an introduction to the speaker and overview of pluggable databases. It then describes the traditional Oracle database architecture and the multitenant architecture in Oracle 19c. It discusses the different components of a container database including the root, seed PDB, and application containers. It also covers how to create pluggable databases from scratch, through cloning locally and remotely, relocating PDBs, and plugging in unplugged PDBs.
Red Stack Tech Ltd is a global Oracle technology brand specialising in the provision of Oracle software, hardware, and managed and professional services across the entire Oracle technology stack. Established in the mid-90s, Red Stack Tech has developed, through R&D and investment in new technologies, a brand that is highly regarded within the Oracle landscape. Red Stack Tech delivers full end-to-end solutions encompassing all Oracle technologies, with a strong focus on Oracle Engineered Systems, Database Management Services and Business Analytics.
Setup Oracle GoldenGate 11g replication - Kanwar Batra
How to set up Oracle GoldenGate replication between 11gR2 RAC or single-node instances. For RAC, set up the GoldenGate custom cluster service, which is not covered in this document.
The document discusses new security concepts introduced in Oracle Multitenant. Key points include:
- Common users exist across all pluggable databases in a container database while local users are specific to a single pluggable database.
- Common users are created in the root container and can access all pluggable databases while local users are limited to a single database.
- The set container privilege allows common users to switch between pluggable databases without reconnecting. This privilege needs to be granted carefully.
- Data dictionary and performance views aggregate information across all pluggable databases when queried from the root container.
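The user and privilege concepts above can be sketched as follows; the user names and PDB name are illustrative.

```sql
-- Common user: created in the root, exists in every container
-- (common user names must start with C## by default)
CREATE USER c##dba_admin IDENTIFIED BY password CONTAINER = ALL;
GRANT CREATE SESSION TO c##dba_admin CONTAINER = ALL;

-- Allow the common user to switch containers without reconnecting;
-- this privilege should be granted carefully
GRANT SET CONTAINER TO c##dba_admin CONTAINER = ALL;

-- Local user: created inside one PDB only
ALTER SESSION SET CONTAINER = pdb1;
CREATE USER app_user IDENTIFIED BY password;

-- From the root, CDB_ views aggregate across all containers
SELECT con_id, username FROM cdb_users WHERE username LIKE 'APP%';
```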
This document discusses upgrading to Oracle Database 19c and migrating to Oracle Multitenant. It provides an overview of key features such as being able to have 3 user-created PDBs without a Multitenant license in 19c. It also demonstrates how to use AutoUpgrade to perform an upgrade and migration to Multitenant with a single command. The document highlights various Multitenant concepts such as resource sharing, connecting to containers, and cloning PDBs.
This document discusses migrating databases to Oracle's multitenant architecture. It begins with an overview of using AutoUpgrade to upgrade databases and then plugging them into a container database (CDB). It also covers concepts of Oracle Multitenant like pluggable databases (PDBs), resource sharing, and connecting to different containers. The document provides guidance on tasks like cloning PDBs, upgrading within a CDB, and migrating non-CDBs to PDBs.
T3 is an optimized protocol used to transport data between WebLogic Server and other Java programs. WebLogic Server tracks each Java Virtual Machine (JVM) it connects to and creates a single T3 connection to carry all traffic for a JVM. For example, if a client accesses an enterprise bean and JDBC connection pool on WebLogic Server, a single network connection is established between the WebLogic Server JVM and the client JVM.
OOW 17 - Database consolidation using the Oracle Multitenant architecture - Pini Dibask
This document discusses database consolidation using Oracle Multitenant. It begins with an introduction to multitenant architecture and concepts. It then covers ensuring quality of service in multitenant environments using Oracle Resource Manager. The document also discusses using RAC with multitenant databases and performance monitoring for multitenant environments.
Similar to Clone_a_remote_PDB_in_Data_Guard_Environments_19c_1698741799.pdf (20)
1) The document discusses Oracle ASM Filter Driver (ASMFD), ASMLIB, and how they relate to managing I/O for Oracle databases on Linux. ASMFD replaces ASMLIB, providing persistent device naming and preventing accidental overwrites of Oracle disks.
2) It provides information on when and how to use ASM with and without ASMLIB, alternatives to each, and how to configure Oracle single-instance and RAC databases with and without ASM and ASMLIB. Configuration without these components can use filesystems, LVM, or third-party cluster file systems instead.
Recovering an Oracle datafile without backup.pdf - Alireza Kamrani
This document describes how to recover an Oracle database file without a backup by:
1. Creating an empty file with the same size as the damaged file using ALTER DATABASE CREATE DATAFILE.
2. Performing media recovery on the empty file to apply archived redo logs and restore the data.
3. After recovery, the database can be opened with a resetlogs.
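The steps above can be sketched as follows, assuming the lost file's path is known and all archived redo generated since the file was created is still available; the file path is illustrative.

```sql
-- 1. Re-create an empty datafile with the same name and size
ALTER DATABASE CREATE DATAFILE '/u01/oradata/DB1/users02.dbf';

-- 2. Apply archived redo to rebuild the file's contents
RECOVER DATAFILE '/u01/oradata/DB1/users02.dbf';

-- 3. Bring the recovered file back online
ALTER DATABASE DATAFILE '/u01/oradata/DB1/users02.dbf' ONLINE;
```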
♨️How To Use DataPump (EXPDP) To Export From Physical Standby….pdf - Alireza Kamrani
This document provides steps to successfully export data from a physical standby database using Data Pump Export (EXPDP). It explains that EXPDP cannot be run directly on the physical standby due to its read-only status, so a database link must be used to connect from a non-standby database. The physical standby must be opened in read-only mode before exporting. Example commands are given to create a database link, open the physical standby read-only, and run EXPDP with the NETWORK_LINK parameter to export the data. Common errors that can occur without using these steps are also described.
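The approach above can be sketched as follows; the link name, credentials, TNS alias, and schema are illustrative.

```sql
-- On a separate read-write database: create a link to the standby
CREATE DATABASE LINK standby_link
  CONNECT TO system IDENTIFIED BY password
  USING 'STANDBY_TNS_ALIAS';

-- On the standby: it must be open read-only before exporting
-- ALTER DATABASE OPEN READ ONLY;

-- Then, from the read-write database's host, run Data Pump Export
-- pulling data over the link rather than from local files:
--   expdp system/password NETWORK_LINK=standby_link
--         SCHEMAS=hr DUMPFILE=hr.dmp LOGFILE=hr.log
```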
♨️CPU limitation per Oracle database instance - Alireza Kamrani
Cgroups improve database performance by associating a dedicated set of CPUs and memory with a database instance, limiting each instance to only those resources. The setup_processor_group.sh script is used to create cgroups on Linux systems. To bind a database instance to a cgroup, the PROCESSOR_GROUP_NAME parameter must be set to the cgroup name and the instance restarted. Best practices include allocating a cgroup's CPU threads from as few cores/sockets as possible and giving each cgroup at least 2 CPU cores.
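Binding an instance to a pre-created processor group can be sketched as follows; the group name is illustrative.

```sql
-- Point the instance at an existing cgroup, then restart
ALTER SYSTEM SET processor_group_name = 'oracle_pg1' SCOPE = SPFILE;
SHUTDOWN IMMEDIATE
STARTUP

-- Verify the binding took effect
SHOW PARAMETER processor_group_name
```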
Out-of-Place Oracle Database Patching and Provisioning Golden Images - Alireza Kamrani
Out-of-place Oracle database patching involves creating a new Oracle Home, applying patches to it, and updating the Oracle Inventory. Golden images can then be created by cloning an existing Oracle Home or Grid Home. Additional Oracle features can be provisioned using the -apply_ru option after applying patches to the golden image. These techniques help minimize downtime and maintain consistency when upgrading Oracle databases.
IO Schedulers (Elevator) concept and its effect on database performance - Alireza Kamrani
I/O schedulers in Linux reorder and group I/O requests to improve throughput while balancing latency. Different schedulers take different approaches, and there is no single best scheduler for all situations. For Oracle databases on Linux, Oracle recommends using the Deadline scheduler for HDD storage to prioritize I/O requests, while the none scheduler may be best for SSD/NVMe storage. When selecting a scheduler, it is important to consider the storage media and I/O characteristics of the workload.
The Fundamental Characteristics of Storage Concepts for DBAs - Alireza Kamrani
The document discusses key storage concepts for database administrators including latency, IOPS, and bandwidth. Latency refers to the delay in a storage system's response to an I/O request, typically measured in milliseconds for disk and microseconds for flash. IOPS represents the number of input/output operations per second a storage system can support. Bandwidth refers to the amount of data that can be transferred per second, measured in megabytes or gigabytes per second. These concepts are related, as IOPS and latency increase as storage systems approach maximum throughput. It is important for DBAs to understand how applications will impact I/O patterns in terms of these concepts when choosing appropriate storage solutions.
What is Scalability and How It Can Affect Overall Database System Performance - Alireza Kamrani
Scalability refers to a system's ability to handle increased workload by proportionally increasing resource usage. Poor scalability can occur due to resource conflicts like locking, consistency work, I/O, or queries that don't scale well. Systems become unscalable if a resource is exhausted, limiting throughput and response times. There are two types of scaling: vertical involves more powerful hardware, while horizontal adds more nodes without changing individual nodes. Sharding distributes data across partitions to improve performance and storage limits by scaling out horizontally.
🏗️Improve database performance with connection pooling and load balancing tec... - Alireza Kamrani
This document discusses improving database performance through connection pooling and load balancing. It describes how connection pooling reuses database connections to optimize performance as traffic and clients grow. It then summarizes several Oracle and MySQL/MariaDB solutions for connection pooling and load balancing, including Oracle Traffic Director, Oracle Connection Manager, MariaDB MaxScale, and ProxySQL. These solutions can distribute database requests, provide high availability, and monitor performance.
Lock-free reservations is a new Oracle Database 23c feature that allows concurrent transactions to modify the same data without blocking each other. It does this by reserving rows for updates rather than locking them, and verifies that the updates can succeed at commit time. This improves concurrency and user experience over traditional locking. Lock-free reservations can be used for applications that manage shared resources like tickets, seats, or balances to allow high concurrency without sessions hanging. It works by having the database track reservations in a temporary journal table rather than locking data.
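The feature above can be sketched as follows, assuming Oracle Database 23c; the table and column names are illustrative.

```sql
-- 23c: mark a numeric column as RESERVABLE so concurrent updates
-- against it reserve the change instead of locking the row
CREATE TABLE tickets (
  event_id   NUMBER PRIMARY KEY,
  seats_left NUMBER RESERVABLE
             CONSTRAINT seats_ck CHECK (seats_left >= 0)
);

-- Two sessions can both run this without one blocking the other;
-- the reservation is verified and applied at COMMIT time
UPDATE tickets SET seats_left = seats_left - 1 WHERE event_id = 101;
COMMIT;
```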
Store non-structured data in JSON column types and enhancements of JSON - Alireza Kamrani
The document discusses features and enhancements for working with JSON data in Oracle databases. It describes how JSON can be stored in a JSON column type or text-based columns like CLOB. Validation functions are introduced in Oracle 23c to check JSON data for compatibility with the JSON type and validate against schemas. The document provides examples of converting text-based JSON columns to the JSON type and using the new validation functions.
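The storage and validation options above can be sketched as follows; the table, columns, and the inline JSON schema are illustrative, and the schema-validation condition assumes Oracle Database 23c.

```sql
-- Native JSON column type (21c+) alongside a text column
-- constrained to hold well-formed JSON
CREATE TABLE orders (
  id      NUMBER PRIMARY KEY,
  payload JSON,
  legacy  CLOB CHECK (legacy IS JSON)
);

-- 23c: validate JSON data against a JSON schema
SELECT id
FROM   orders
WHERE  payload IS JSON VALIDATE
       '{ "type": "object",
          "properties": { "qty": { "type": "number" } } }';
```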
Enhancements in Oracle 23c: Introducing the New/Old Returning Clause - Alireza Kamrani
The document discusses Oracle 23c's enhanced Returning Clause feature for DML statements. Key points:
1) The Returning Clause now allows developers to retrieve both new and old column values for UPDATE statements, making it consistent across DML types. Previously only new values could be easily retrieved.
2) For INSERT statements, only new column values are returned, while DELETE statements return only old column values.
3) This feature streamlines application development by simplifying how developers retrieve pre- and post-operation data values from DML statements.
PostgreSQL and Oracle are both relational database management systems, but they differ in several key areas. PostgreSQL uses CTIDs to track rows, while Oracle uses ROWIDs. PostgreSQL uses MVCC for concurrency control and automatic vacuuming for disk space management, while Oracle uses undo space management. PostgreSQL configuration files include postgresql.conf and pg_hba.conf, analogous to Oracle's spfile and listener configuration files. Some other differences include how each handles write-ahead logs, database objects, backup and restore, and system monitoring. While not identical, both aim to provide robust relational database functionality.
Open Source Contributions to Postgres: The Basics POSETTE 2024ElizabethGarrettChri
Postgres is the most advanced open-source database in the world and it's supported by a community, not a single company. So how does this work? How does code actually get into Postgres? I recently had a patch submitted and committed and I want to share what I learned in that process. I’ll give you an overview of Postgres versions and how the underlying project codebase functions. I’ll also show you the process for submitting a patch and getting that tested and committed.
We are pleased to share with you the latest VCOSA statistical report on the cotton and yarn industry for the month of March 2024.
Starting from January 2024, the full weekly and monthly reports will only be available for free to VCOSA members. To access the complete weekly report with figures, charts, and detailed analysis of the cotton fiber market in the past week, interested parties are kindly requested to contact VCOSA to subscribe to the newsletter.
The Ipsos - AI - Monitor 2024 Report.pdfSocial Samosa
According to Ipsos AI Monitor's 2024 report, 65% Indians said that products and services using AI have profoundly changed their daily life in the past 3-5 years.
4th Modern Marketing Reckoner by MMA Global India & Group M: 60+ experts on W...Social Samosa
The Modern Marketing Reckoner (MMR) is a comprehensive resource packed with POVs from 60+ industry leaders on how AI is transforming the 4 key pillars of marketing – product, place, price and promotions.
Introduction to Jio Cinema**:
- Brief overview of Jio Cinema as a streaming platform.
- Its significance in the Indian market.
- Introduction to retention and engagement strategies in the streaming industry.
2. **Understanding Retention and Engagement**:
- Define retention and engagement in the context of streaming platforms.
- Importance of retaining users in a competitive market.
- Key metrics used to measure retention and engagement.
3. **Jio Cinema's Content Strategy**:
- Analysis of the content library offered by Jio Cinema.
- Focus on exclusive content, originals, and partnerships.
- Catering to diverse audience preferences (regional, genre-specific, etc.).
- User-generated content and interactive features.
4. **Personalization and Recommendation Algorithms**:
- How Jio Cinema leverages user data for personalized recommendations.
- Algorithmic strategies for suggesting content based on user preferences, viewing history, and behavior.
- Dynamic content curation to keep users engaged.
5. **User Experience and Interface Design**:
- Evaluation of Jio Cinema's user interface (UI) and user experience (UX).
- Accessibility features and device compatibility.
- Seamless navigation and search functionality.
- Integration with other Jio services.
6. **Community Building and Social Features**:
- Strategies for fostering a sense of community among users.
- User reviews, ratings, and comments.
- Social sharing and engagement features.
- Interactive events and campaigns.
7. **Retention through Loyalty Programs and Incentives**:
- Overview of loyalty programs and rewards offered by Jio Cinema.
- Subscription plans and benefits.
- Promotional offers, discounts, and partnerships.
- Gamification elements to encourage continued usage.
8. **Customer Support and Feedback Mechanisms**:
- Analysis of Jio Cinema's customer support infrastructure.
- Channels for user feedback and suggestions.
- Handling of user complaints and queries.
- Continuous improvement based on user feedback.
9. **Multichannel Engagement Strategies**:
- Utilization of multiple channels for user engagement (email, push notifications, SMS, etc.).
- Targeted marketing campaigns and promotions.
- Cross-promotion with other Jio services and partnerships.
- Integration with social media platforms.
10. **Data Analytics and Iterative Improvement**:
- Role of data analytics in understanding user behavior and preferences.
- A/B testing and experimentation to optimize engagement strategies.
- Iterative improvement based on data-driven insights.
Build applications with generative AI on Google CloudMárton Kodok
We will explore Vertex AI - Model Garden powered experiences, we are going to learn more about the integration of these generative AI APIs. We are going to see in action what the Gemini family of generative models are for developers to build and deploy AI-driven applications. Vertex AI includes a suite of foundation models, these are referred to as the PaLM and Gemini family of generative ai models, and they come in different versions. We are going to cover how to use via API to: - execute prompts in text and chat - cover multimodal use cases with image prompts. - finetune and distill to improve knowledge domains - run function calls with foundation models to optimize them for specific tasks. At the end of the session, developers will understand how to innovate with generative AI and develop apps using the generative ai industry trends.
End-to-end pipeline agility - Berlin Buzzwords 2024Lars Albertsson
We describe how we achieve high change agility in data engineering by eliminating the fear of breaking downstream data pipelines through end-to-end pipeline testing, and by using schema metaprogramming to safely eliminate boilerplate involved in changes that affect whole pipelines.
A quick poll on agility in changing pipelines from end to end indicated a huge span in capabilities. For the question "How long time does it take for all downstream pipelines to be adapted to an upstream change," the median response was 6 months, but some respondents could do it in less than a day. When quantitative data engineering differences between the best and worst are measured, the span is often 100x-1000x, sometimes even more.
A long time ago, we suffered at Spotify from fear of changing pipelines due to not knowing what the impact might be downstream. We made plans for a technical solution to test pipelines end-to-end to mitigate that fear, but the effort failed for cultural reasons. We eventually solved this challenge, but in a different context. In this presentation we will describe how we test full pipelines effectively by manipulating workflow orchestration, which enables us to make changes in pipelines without fear of breaking downstream.
Making schema changes that affect many jobs also involves a lot of toil and boilerplate. Using schema-on-read mitigates some of it, but has drawbacks since it makes it more difficult to detect errors early. We will describe how we have rejected this tradeoff by applying schema metaprogramming, eliminating boilerplate but keeping the protection of static typing, thereby further improving agility to quickly modify data pipelines without fear.
Hot Clone a remote PDB in Data Guard Environments using Transient no-standby PDBs
Introduction
Oracle Multitenant is integrated with Oracle Data Guard. Data Guard is configured at the CDB level and
replicates all transactions from the primary to the standby for all PDBs in a single stream, including the
creation and deletion of PDBs. But there is a catch.
When we create a new PDB, which is a clone from PDB$SEED, or create a local clone, the data files of the
source PDB are already present on the standby site, and hence, the operation replays successfully on the
standby database too.
However, when creating a remote clone, the source data files are only present on the remote CDB but not
on the standby site. The remote clone operation will succeed on the primary but leave you with no data files
on standby.
In this blog post, we will discuss a solution using “transient no-standby PDB” to overcome this challenge.
The Environment
We will use the following:
• Source: Single Instance Database version 19.12, using Multitenant, TDE encrypted, and WE8CED
character set. Database unique name is CDBWE8_fra3s6.
• Target: RAC Database version 19.12 in Active Data Guard configuration, using Multitenant, TDE
encrypted, and AL32UTF8 character set.
• Primary database unique name is RACCDB_AD1.
• Standby database unique name is RACCDB_fra2xr.
The same procedure applies whether the source or target is a single-instance or a RAC database.
The target database doesn't need to use Active Data Guard either; plain Data Guard works as well.
However, the CDB$ROOT needs to be open in read-only mode on the standby. As of Oracle database
version 19c, the Active Data Guard option license is not required when only the CDB$ROOT is open
read-only on standby.
Oracle Database version 21c
In 21c, a new feature called PDB Recovery Isolation was introduced. With that, a hot clone operation does
not require any further manual steps.
PDB Recovery Isolation requires the Active Data Guard option.
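With PDB Recovery Isolation, the whole procedure below collapses into a single remote hot clone on the 21c primary. A minimal sketch, reusing the object names from this post and assuming the database link and keystore are already in place:

```sql
-- On a 21c primary with the Active Data Guard option:
-- the standby takes care of the new PDB's data files automatically.
create pluggable database FINALPDB from PDBWE8@DBLINK_TO_WE8
  keystore identified by "WElcome123##";

alter pluggable database FINALPDB open instances=all;
```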
On the Source Database
Step 1: Prepare the C##SYSOPER user on the Source Database
Grant the C##SYSOPER user the required privileges to be used for the database link:
SQL> grant create session, sysoper to C##SYSOPER identified by WElcome123## container=all;
Grant succeeded.
On the Target Primary Database
Step 2: Create Database Link from Target to Source
Create a database link on the target primary database pointing to the source database:
SQL> create database link DBLINK_TO_WE8 connect to C##SYSOPER identified by WElcome123## using
'<connection_string_to_source_database>';
Database link created.
As we are not using a database link with the same name as the database it connects to, we need to
additionally set the global_names parameter to false:
SQL> alter system set global_names=false scope=both sid='*';
System altered.
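Before cloning, it is worth a quick sanity check that the link actually resolves (just a verification, not part of the original procedure):

```sql
-- A successful round trip to the source database returns one row.
select * from dual@DBLINK_TO_WE8;
```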
Step 3: Create a Remote Hot Clone PDB from the Source Database
Use the database link from the previous step to create a hot clone from the source PDB, however providing
the STANDBYS=NONE clause, so the clone PDB will only exist on the primary database, but not on the
standby. This is not our final clone, but just a “transient” one. This is why it’s called “transient no-standby
PDB“.
SQL> create pluggable database TRANSPDB from PDBWE8@DBLINK_TO_WE8 keystore identified by
"WElcome123##" STANDBYS=NONE;
Pluggable database created.
SQL> alter pluggable database TRANSPDB open instances=all;
Pluggable database altered.
SQL> show pdbs
CON_ID CON_NAME   OPEN MODE  RESTRICTED
------ ---------- ---------- ----------
     2 PDB$SEED   READ ONLY  NO
     3 RACPDB     READ WRITE NO
     4 TRANSPDB   READ WRITE NO
Now, if you check the standby site, you’ll see the transient PDB in “MOUNTED” mode, but there are no data
files associated with it:
-- on the standby
SQL> show pdbs
CON_ID CON_NAME   OPEN MODE  RESTRICTED
------ ---------- ---------- ----------
     2 PDB$SEED   READ ONLY  NO
     3 RACPDB     READ ONLY  NO
     4 TRANSPDB   MOUNTED
SQL> select name from v$datafile where con_id=4;
NAME
--------------------------------------------------------------
/u01/app/oracle/product/19.0.0.0/dbhome_1/dbs/UNNAMED00044
/u01/app/oracle/product/19.0.0.0/dbhome_1/dbs/UNNAMED00045
/u01/app/oracle/product/19.0.0.0/dbhome_1/dbs/UNNAMED00046
/u01/app/oracle/product/19.0.0.0/dbhome_1/dbs/UNNAMED00047
/u01/app/oracle/product/19.0.0.0/dbhome_1/dbs/UNNAMED00048
/u01/app/oracle/product/19.0.0.0/dbhome_1/dbs/UNNAMED00049
Step 4: Create a self-referencing Database Link on the Primary Database
On the target primary, grant the C##SYSOPER user the required privileges and create a database link using
the connection string of the primary itself:
SQL> grant create session, sysoper to C##SYSOPER identified by WElcome123## container=all;
Grant succeeded.
SQL> create database link DBLINK_TO_PRIM connect to C##SYSOPER identified by WElcome123##
using '<connection_string_of_primary>';
Database link created.
This operation is replicated via Data Guard redo apply to the standby database, so we end up with a
database link on the standby pointing to the primary. This IS COOL!
As the database link on the standby does not use the exact name of the database it connects to, we also
need to set global_names to false on the standby:
-- on standby
SQL> alter system set global_names=false scope=both sid='*';
System altered.
On the Standby Database
Step 5: Set standby_pdb_source_file_dblink on Standby
On the standby database, set the standby_pdb_source_file_dblink parameter to the name of the database
link created in step 4:
SQL> alter system set standby_pdb_source_file_dblink=DBLINK_TO_PRIM scope=both sid='*';
System altered.
This parameter specifies the name of the database link that will automatically be used in the next step to
copy the data files from the primary to standby during a local clone operation.
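To double-check the setting on the standby:

```sql
-- On the standby: confirm the database link name is set.
show parameter standby_pdb_source_file_dblink
```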
Back to the Primary Database
Step 6: Create a Local Cold Clone from the Transient PDB
On the primary database, set the transient PDB in read-only mode, and create a clone of it, this time using
STANDBYS=ALL:
SQL> alter pluggable database TRANSPDB close immediate instances=all;
Pluggable database altered.
SQL> alter pluggable database TRANSPDB open read only instances=all;
Pluggable database altered.
SQL> create pluggable database FINALPDB from TRANSPDB keystore identified by "WElcome123##"
STANDBYS=ALL;
Pluggable database created.
-- open the new PDB
SQL> alter pluggable database FINALPDB open instances=all;
Pluggable database altered.
-- on the standby
SQL> show pdbs;
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 RACPDB READ ONLY NO
4 TRANSPDB MOUNTED
5 FINALPDB MOUNTED
SQL> select name from v$datafile where con_id=5;
NAME
-----------------------------------------------------------------------------------------
+DATA/RACCDB_FRA2XR/C9A7E2196EE27681E0531F02640A18C1/DATAFILE/system.280.1089037503
+DATA/RACCDB_FRA2XR/D125EC846C2D2376E053A800000A3055/DATAFILE/sysaux.281.1089037509
+DATA/RACCDB_FRA2XR/D125EC846C2D2376E053A800000A3055/DATAFILE/undotbs1.282.1089037513
+DATA/RACCDB_FRA2XR/D125EC846C2D2376E053A800000A3055/DATAFILE/users.283.1089037515
+DATA/RACCDB_FRA2XR/D125EC846C2D2376E053A800000A3055/DATAFILE/we8tbs.284.1089037515
+DATA/RACCDB_FRA2XR/D125EC846C2D2376E053A800000A3055/DATAFILE/undo_2.285.1089037517
This time, the new PDB clone is created on the standby with all its data files, as they have been
copied over the database link created in step 4. This is our final PDB.
However, suppose you use local software wallets to manage the TDE master encryption keys instead of
Oracle Key Vault (OKV) or OCI Vault in Oracle Cloud. In that case, the Data Guard Redo Apply will stop
after opening the PDB, as the standby does not have access to the TDE key of the newly cloned PDB:
dgmgrl
DGMGRL> connect sys;
DGMGRL> show configuration;
Configuration - RACCDB_AD1_RACCDB_fra2xr
Protection Mode: MaxPerformance
Members:
RACCDB_AD1 - Primary database
RACCDB_fra2xr - Physical standby database
Error: ORA-16810: multiple errors or warnings detected for the member
Fast-Start Failover: Disabled
Configuration Status:
ERROR (status updated 37 seconds ago)
If your source PDB did not have a TDE key, then Redo Apply will stop once you create a new one as
described in the next step.
Step 7: Create a new TDE Key for the Final PDB
After cloning an encrypted PDB, do not continue using the same key of the source PDB, but create a new
TDE master encryption key for the newly cloned PDB instead:
SQL> alter session set container=FINALPDB;
Session altered.
SQL> administer key management set key force keystore identified by WElcome123## with backup;
keystore altered.
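To verify that FINALPDB now has its own master key, the key metadata can be queried (a quick check, not part of the original steps):

```sql
-- Still in the FINALPDB container: the newest key should have
-- a creation time matching the command above.
select key_id, creation_time from v$encryption_keys
order by creation_time;
```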
Step 8: Copy the TDE Wallet Files to the Standby Site
Copy the TDE wallet files from the primary to the standby server:
scp -p /opt/oracle/dcs/commonstore/wallets/tde/RACCDB_AD1/*wallet.*
oracle@<standby_server>:/opt/oracle/dcs/commonstore/wallets/tde/RACCDB_fra2xr/
cwallet.sso 100% 9432 5.8MB/s 00:00
ewallet.p12 100% 9387 8.3MB/s 00:00
For the new keys to get recognized by the standby database, we need to close and re-open the key store on
the standby database:
-- on the standby
SQL> administer key management set keystore close container=ALL;
keystore altered.
-- key store will open automatically as soon as we query the v$encryption_wallet view
SQL> select p.con_id, p.name, p.open_mode, ew.wrl_type, ew.wallet_type, ew.status
from v$pdbs p join v$encryption_wallet ew on (ew.con_id = p.con_id);
CON_ID NAME       OPEN_MODE  WRL_TYPE WALLET_TYPE STATUS
------ ---------- ---------- -------- ----------- ------
     2 PDB$SEED   READ ONLY  FILE     AUTOLOGIN   OPEN
     3 RACPDB     READ ONLY  FILE     AUTOLOGIN   OPEN
     4 TRANSPDB   MOUNTED    FILE     AUTOLOGIN   OPEN
     5 FINALPDB   MOUNTED    FILE     AUTOLOGIN   OPEN
Step 9: Start the Redo Apply
Now, the Redo Apply process can be started again:
DGMGRL> edit database RACCDB_fra2xr set state = 'apply-on';
-- wait a few seconds
DGMGRL> show configuration
Configuration - RACCDB_AD1_RACCDB_fra2xr
Protection Mode: MaxPerformance
Members:
RACCDB_AD1 - Primary database
RACCDB_fra2xr - Physical standby database
Fast-Start Failover: Disabled
Configuration Status:
SUCCESS (status updated 71 seconds ago)
Step 10: Drop the Transient PDB
Finally, you may want to drop the transient PDB. On the primary database:
SQL> alter pluggable database TRANSPDB close immediate instances=all;
Pluggable database altered.
SQL> drop pluggable database TRANSPDB including datafiles;
Pluggable database dropped.
Drop the database links if they are not needed anymore.
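For instance, using the link names from this post:

```sql
-- On the target primary (the drop of DBLINK_TO_PRIM
-- replicates to the standby via redo apply):
drop database link DBLINK_TO_WE8;
drop database link DBLINK_TO_PRIM;
```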
Considerations
Let’s consider the storage and time needed for this scenario:
• Storage: we need 2x the size of the PDB to be cloned on the primary database, once for the transient no-
standby PDB, and once for the final PDB. This is only used temporarily until the transient no-standby PDB is
deleted. However, this might become challenging when cloning huge PDBs.
• Time: we need to execute two cloning operations, one remote and one local. The time needed depends
mainly on the database size, the degree of parallelism, and the network throughput between source and
primary for the first clone, and between primary and standby for the second clone. However, the local
clone is a cold clone that just copies the data files without the need for any recovery operations, so it
should be much faster than the hot clone, especially if the source was a busy transactional database.
If you want to avoid the second local clone, you could instead continue recovering the transient
no-standby PDB on the standby site.
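As a rough outline of that alternative (not performed in this post, and version-dependent — see the Data Guard documentation on enabling recovery of an excluded PDB): stop redo apply, restore the PDB's data files from the primary over the network, re-enable recovery, and restart apply. The service and member names below are reused from this post for illustration:

```sql
-- On the standby (sketch only; adjust service and member names):
-- 1. Stop redo apply, e.g. in DGMGRL:
--      edit database RACCDB_fra2xr set state='apply-off';
-- 2. In RMAN, connected to the standby, restore the data files:
--      run {
--        set newname for pluggable database TRANSPDB to new;
--        restore pluggable database TRANSPDB from service RACCDB_AD1;
--      }
-- 3. In SQL*Plus on the standby root, re-enable recovery for the PDB:
alter pluggable database TRANSPDB enable recovery;
-- 4. Restart redo apply:
--      edit database RACCDB_fra2xr set state='apply-on';
```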
Conclusion
In Data Guard environments, creating a new PDB or cloning a local PDB replays successfully on the
standby, as the data files of the source are already present on the standby site. This is obviously not the
case for remote cloning. To get the data files onto the standby, you can either:
1. Create a transient no-standby PDB first, then create a local clone of that PDB while using a database
link from the standby to the primary (as discussed here), or
2. Recover the no-standby PDB on the standby site.
The end.