Feed aggregator

Schema vs User

Tom Kyte - 12 hours 51 min ago
Hi Tom, When a user is created using "create user" it will create a user as well as a schema with the same name as the user. Also, there is a way to create only a schema, using "create schema". But how do I create a user without an associated schema? Like I want to create a user "user1_readonly", and give select-only permission on all objects in the "user1" schema. ("user1" schema got created when the "user1" user was created.) I've seen in our environment that there are 5 schemas, and more than 50 users, but I don't know how those 50-5 = 45 users were created. Thanks! Karteek
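
(A sketch, not part of the original question: in Oracle every user is also a schema, so the usual approach is to create a user that owns nothing and grant it object privileges on user1's tables from a DBA account. The password and tablespace below are placeholders.)

create user user1_readonly identified by "ChangeMe#2022"
  default tablespace users
  quota 0 on users;

grant create session to user1_readonly;

-- Grant SELECT (or READ, from 12c onwards) on every existing USER1 table;
-- tables created later will need a fresh grant.
begin
  for t in (select table_name from dba_tables where owner = 'USER1') loop
    execute immediate 'grant select on user1."' || t.table_name || '" to user1_readonly';
  end loop;
end;
/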
Categories: DBA Blogs

Turning Column Into Rows

Tom Kyte - 12 hours 51 min ago
Hi Tom, I have the following situation. Please see the SQL's and DDL statements. The question is: instead of 2 "INSERT INTO...SELECT" statements, is there any way I can do it in one shot? SQL> CREATE TABLE x ( 2 country VARCHAR2(30), 3 state VARCHAR2(10) 4 ); Table created. SQL> SQL> INSERT INTO x VALUES ('US', 'CA'); 1 row created. SQL> INSERT INTO x VALUES ('US', 'MA'); 1 row created. SQL> INSERT INTO x VALUES ('US', 'HI'); 1 row created. SQL> INSERT INTO x VALUES ('US', 'TX'); 1 row created. SQL> SQL> commit; Commit complete. SQL> SQL> CREATE TABLE y ( 2 attribute VARCHAR2(30), 3 attribute_value VARCHAR2(30) 4 ); Table created. SQL> INSERT INTO y 2 SELECT DISTINCT country 3 FROM x; INSERT INTO y * ERROR at line 1: ORA-00947: not enough values SQL> INSERT INTO y 2 SELECT DISTINCT 'Country', country 3 FROM x; 1 row created. SQL> select * from y; ATTRIBUTE ATTRIBUTE_VALUE ------------------------------ --------------- Country US SQL> INSERT INTO y 2 SELECT DISTINCT 'State', state 3 FROM x; 4 rows created. SQL> commit; Commit complete. SQL> select * from y; ATTRIBUTE ATTRIBUTE_VALUE ------------------------------ --------------- Country US State CA State HI State MA State TX thanks - Su Baba
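
(A sketch, not from the original thread: one way to load both attributes in a single statement is UNPIVOT, which turns the COUNTRY and STATE columns into attribute/value pairs in one pass over X.)

insert into y (attribute, attribute_value)
select distinct attribute, attribute_value
from   x
unpivot (attribute_value for attribute in (country as 'Country', state as 'State'));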
Categories: DBA Blogs

Securing the CICD Pipeline in Cloud

Pakistan's First Oracle Blog - Wed, 2022-12-07 16:15

CICD has become ubiquitous in almost every organization, in one form or another, to release code. This code could be application related, or it could be for provisioning infrastructure.

Securing the CICD pipeline amid cloud sprawl shouldn't be an afterthought. There are numerous threat vectors that could compromise a CICD pipeline. These threat vectors range from liberal IAM policies to overlooked auto-merge features, and from ignored build processes to ungoverned third-party package usage.

Here are the Top 10 CI/CD Security Risks from Cider.


Cider says that this document helps defenders identify focus areas for securing their CI/CD ecosystem. It is the result of extensive research into attack vectors associated with CI/CD, and of analysis of high-profile breaches and security flaws. Numerous industry experts across multiple verticals and disciplines came together to collaborate on this document to ensure its relevance to today’s threat landscape, risk surface, and the challenges that defenders face in dealing with these risks.


Categories: DBA Blogs

ALTER TABLE NOCOMPRESS

Dominic Brooks - Mon, 2022-12-05 10:36

What a difference three words can make.

No, not those ones, these ones: MODIFY DEFAULT ATTRIBUTES.

https://docs.oracle.com/en/database/oracle/oracle-database/21/vldbg/maintenance-partition-tables-indexes.html#GUID-1C8A5365-F6DD-4A81-8450-1B66E8BEA657

create table domtest
(col1 number
,col2 varchar2(200))
partition by range (col1) interval (1)
(partition p0 values less than (1))
compress for oltp;

> Table created.

insert into domtest
with x as (select level lvl from dual connect by level <= 10)
select lvl, rpad(lvl,200,'X') from x;

> 10 row(s) inserted.

commit;

> Statement processed.

select table_name, def_compression, def_compress_for
from   user_part_tables
where  table_name = 'DOMTEST';

TABLE_NAME	DEF_COMPRESSION	DEF_COMPRESS_FOR
DOMTEST	ENABLED	ADVANCED

select table_name, partition_name, high_value, compression, compress_for
from   user_tab_partitions
where  table_name = 'DOMTEST';

TABLE_NAME	PARTITION_NAME	HIGH_VALUE	COMPRESSION	COMPRESS_FOR
DOMTEST	P0	1	ENABLED	ADVANCED
DOMTEST	SYS_P487584	2	ENABLED	ADVANCED
DOMTEST	SYS_P487585	3	ENABLED	ADVANCED
DOMTEST	SYS_P487586	4	ENABLED	ADVANCED
DOMTEST	SYS_P487587	5	ENABLED	ADVANCED
DOMTEST	SYS_P487588	6	ENABLED	ADVANCED
DOMTEST	SYS_P487589	7	ENABLED	ADVANCED
DOMTEST	SYS_P487590	8	ENABLED	ADVANCED
DOMTEST	SYS_P487591	9	ENABLED	ADVANCED
DOMTEST	SYS_P487592	10	ENABLED	ADVANCED
DOMTEST	SYS_P487593	11	ENABLED	ADVANCED

If I want to change the default compression at table level so that new partitions are no longer compressed, then those three words make a big difference.

alter table domtest nocompress;
>Table altered.

This does what we want:

select table_name, def_compression, def_compress_for
from   user_part_tables
where  table_name = 'DOMTEST';

TABLE_NAME	DEF_COMPRESSION	DEF_COMPRESS_FOR
DOMTEST	DISABLED	 - 

But it also does a whole lot we don’t want. It doesn’t move the existing data, but it does change the compression attribute of every existing partition, so new data inserted into those partitions will no longer be compressed.

select table_name, partition_name, high_value, compression, compress_for
from   user_tab_partitions
where  table_name = 'DOMTEST';

TABLE_NAME	PARTITION_NAME	HIGH_VALUE	COMPRESSION	COMPRESS_FOR
DOMTEST	P0	1	DISABLED	 - 
DOMTEST	SYS_P487584	2	DISABLED	 - 
DOMTEST	SYS_P487585	3	DISABLED	 - 
DOMTEST	SYS_P487586	4	DISABLED	 - 
DOMTEST	SYS_P487587	5	DISABLED	 - 
DOMTEST	SYS_P487588	6	DISABLED	 - 
DOMTEST	SYS_P487589	7	DISABLED	 - 
DOMTEST	SYS_P487590	8	DISABLED	 - 
DOMTEST	SYS_P487591	9	DISABLED	 - 
DOMTEST	SYS_P487592	10	DISABLED	 - 
DOMTEST	SYS_P487593	11	DISABLED	 - 

Whereas if we reset and then use the correct syntax from the get-go:

alter table domtest modify default attributes nocompress;
>Table altered.

select table_name, def_compression, def_compress_for
from   user_part_tables
where  table_name = 'DOMTEST';

TABLE_NAME	DEF_COMPRESSION	DEF_COMPRESS_FOR
DOMTEST	DISABLED	 - 

select table_name, partition_name, high_value, compression, compress_for
from   user_tab_partitions
where  table_name = 'DOMTEST';

TABLE_NAME	PARTITION_NAME	HIGH_VALUE	COMPRESSION	COMPRESS_FOR
DOMTEST	P0	1	ENABLED	ADVANCED
DOMTEST	SYS_P487584	2	ENABLED	ADVANCED
DOMTEST	SYS_P487585	3	ENABLED	ADVANCED
DOMTEST	SYS_P487586	4	ENABLED	ADVANCED
DOMTEST	SYS_P487587	5	ENABLED	ADVANCED
DOMTEST	SYS_P487588	6	ENABLED	ADVANCED
DOMTEST	SYS_P487589	7	ENABLED	ADVANCED
DOMTEST	SYS_P487590	8	ENABLED	ADVANCED
DOMTEST	SYS_P487591	9	ENABLED	ADVANCED
DOMTEST	SYS_P487592	10	ENABLED	ADVANCED
DOMTEST	SYS_P487593	11	ENABLED	ADVANCED
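
(Not part of the original post, just a hedged sketch: if the plain NOCOMPRESS has already flipped the existing partitions to DISABLED, the partition-level attribute can be set back with MODIFY PARTITION, which only affects future inserts; recompressing rows that are already there needs a MOVE PARTITION.)

-- reset the attribute for new rows only
alter table domtest modify partition p0 compress for oltp;

-- rewrite the partition so existing rows are compressed again
alter table domtest move partition p0 compress for oltp update indexes;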

Row Migration

Jonathan Lewis - Mon, 2022-12-05 06:11

This is nothing more than a basic update of a note that I wrote 8 years ago. The update was triggered by a brief comment made by Martin Widlake at the recent UKOUG annual conference “Breakthrough 2022” in Birmingham. In his presentation on “wide tables”, he mentioned row migration and the possible effects of a row having to migrate many times as it grew and the possibility (of which he was not certain as he had only a vague memory of hearing the claim at some unspecified time in the past) that it might leave a “long chain of pointers” from the header to the final location of the migrated row.

It occurred to me that the vague memory might have been due to my blog note from 2014 explaining that this doesn’t happen. If a row migrates (i.e. the whole row gets moved to a new location leaving only a header behind pointing to the new location) then at a future point in time it might migrate to a 3rd (or 4th or more) location and update the header pointer, or it might actually migrate back to the original location if space has since become available.

The following script (originally created on 10gR2, but updated for 19.11 and modified for ease of retesting) builds a table, performs a series of updates on a row, and dumps the header block after each update.

rem
rem     Script:         row_migration.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Feb 2014 / Dec 2022
rem
rem     Last tested 
rem             10.2.0.5
rem             19.11.0.0
rem

create table t1 (
        id      number(6,0),
        v1      varchar2(1200)
)
pctfree 0
;


insert  into t1 
select  rownum - 1, rpad('x',100) i
from    all_objects i
where   rownum <= 75
;

commit;

prompt  =========================================
prompt  Get the relative file and block number of 
prompt  the block that holds 74 of the 75 rows, 
prompt  then dump the block to the trace file.
prompt  =========================================

column rel_file_no new_value m_file_no
column block_no    new_value m_block_no

select 
        dbms_rowid.rowid_relative_fno(rowid)    rel_file_no, 
        dbms_rowid.rowid_block_number(rowid)    block_no,
        count(*)                                rows_starting_in_block
from 
        t1
group by 
        dbms_rowid.rowid_relative_fno(rowid), 
        dbms_rowid.rowid_block_number(rowid) 
having
        count(*) = 74
;

alter system flush buffer_cache;
alter session set tracefile_identifier= 'D1_start';
alter system dump datafile &m_file_no block &m_block_no;

prompt  ======================================
prompt  Make the first row migrate and show it
prompt  ======================================

update t1 set v1 = rpad('x',400) where id = 0;
commit;

alter system flush buffer_cache;
alter session set tracefile_identifier= 'D2_migrate';
alter system dump datafile &m_file_no block &m_block_no;


prompt  ======================================
prompt  Fill the block the long row is now in,
prompt  then make it migrate again
prompt  ======================================

insert  into t1 
select  rownum + 75, rpad('x',100) 
from    all_objects 
where   rownum <= 75
;
commit;

update t1 set v1 = rpad('x',800) where id = 0;
commit;

alter system flush buffer_cache;
alter session set tracefile_identifier= 'D3_migrate_more';
alter system dump datafile &m_file_no block &m_block_no;

prompt  =======================================================
prompt  Fill the block the long row is in and shrink the row 
prompt  to see if it returns to its original block. (No).
prompt  =======================================================

insert  into t1 
select  rownum + 150, rpad('x',100) 
from    all_objects 
where   rownum <= 75
;
commit;

-- delete from t1 where id between 1 and 20;
-- commit;

update t1 set v1 = rpad('x',50) where id = 0;
commit;

alter system flush buffer_cache;
alter session set tracefile_identifier= 'D4_shrink_row';
alter system dump datafile &m_file_no block &m_block_no;


prompt  ==============================================
prompt  Make a lot of space in the original block then
prompt  GROW the row again to see if it migrates back.
prompt  ==============================================

delete from t1 where id between 1 and 20;
commit;

update t1 set v1 = rpad('x',1200) where id = 0;
commit;

alter system flush buffer_cache;
alter session set tracefile_identifier= 'D5_forcemigrate';
alter system dump datafile &m_file_no block &m_block_no;

When the script has run there will be 5 trace files, and a single “grep” command to find the entry for the first row of the block (row 0) in each dump will give you results like the following:

[oracle@linux19c trace]$ grep -A+3 "row 0" *19012*.trc

or19_ora_19012_D1_start.trc:tab 0, row 0, @0xab
or19_ora_19012_D1_start.trc-tl: 106 fb: --H-FL-- lb: 0x1  cc: 2
or19_ora_19012_D1_start.trc-col  0: [ 1]  80
or19_ora_19012_D1_start.trc-col  1: [100]
--
or19_ora_19012_D2_migrate.trc:tab 0, row 0, @0xab
or19_ora_19012_D2_migrate.trc-tl: 9 fb: --H----- lb: 0x2  cc: 0
or19_ora_19012_D2_migrate.trc-nrid:  0x090000ac.1
or19_ora_19012_D2_migrate.trc-tab 0, row 1, @0x115
--
or19_ora_19012_D3_migrate_more.trc:tab 0, row 0, @0xab
or19_ora_19012_D3_migrate_more.trc-tl: 9 fb: --H----- lb: 0x1  cc: 0
or19_ora_19012_D3_migrate_more.trc-nrid:  0x090000b0.7
or19_ora_19012_D3_migrate_more.trc-tab 0, row 1, @0x115
--
or19_ora_19012_D4_shrink_row.trc:tab 0, row 0, @0xab
or19_ora_19012_D4_shrink_row.trc-tl: 9 fb: --H----- lb: 0x2  cc: 0
or19_ora_19012_D4_shrink_row.trc-nrid:  0x090000b0.7
or19_ora_19012_D4_shrink_row.trc-tab 0, row 1, @0x115
--
or19_ora_19012_D5_forcemigrate.trc:tab 0, row 0, @0x4b9
or19_ora_19012_D5_forcemigrate.trc-tl: 1208 fb: --H-FL-- lb: 0x2  cc: 2
or19_ora_19012_D5_forcemigrate.trc-col  0: [ 1]  80
or19_ora_19012_D5_forcemigrate.trc-col  1: [1200]

  • The D1 trace shows you the row with a column count (cc) of 2, and the two column lengths.
  • The D2 trace shows you a column count of zero, and an nrid (next rowid) pointing to row 1 (2nd row) of block 0x090000ac.
  • The D3 trace shows you a column count of zero, and an nrid pointing to row 7 (eighth row) of block 0x090000b0; the row has moved to a new location and the header points directly to the new location.
  • The D4 trace shows exactly the same output – after shrinking (even to a length that is less than it started at) the row has not moved back to the original location.
  • The D5 trace shows that the row has now moved back to its original location, even though it is now several hundred bytes longer than it used to be.

If you’re wondering why the row didn’t move back after shrinking at D4 (and even when I made a lot of space available in the original block the shrink didn’t cause a move), remember that Oracle tries to be “lazy” – the update can take place in situ, so Oracle doesn’t waste time and effort checking the original block.
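
(A sketch, not part of the original script: if you just want to know whether a table currently has migrated or chained rows, ANALYZE ... LIST CHAINED ROWS records their rowids in the CHAINED_ROWS table created by the utlchain.sql script shipped with the database. Note it counts migrated and genuinely chained rows together.)

@?/rdbms/admin/utlchain.sql

analyze table t1 list chained rows into chained_rows;

select count(*) from chained_rows where table_name = 'T1';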

Footnote

This note makes no claims about what might happen in a more complex case where a row is so long that it splits into multiple row pieces and the pieces end up scattering across multiple blocks. There are a couple of variations on that problem that might be worth investigating if you suspect that there is some behaviour of very wide tables or very long rows that is the source of a performance problem relating to excessive buffer gets or db file sequential reads.

Oracle APEX - Debugging Tip #1

Rittman Mead Consulting - Mon, 2022-12-05 03:00

If I had needed this feature I would probably have invested a lot of time trying to find an answer. But funnily enough I found it out by mistake, and as it might be useful to someone else I have decided to publish this short blog post.

Application Process on Global Page

When working with Report Queries, Oracle APEX creates a URL that you can use in your buttons or branches to call the report itself. That URL uses the global page 0 with a special REQUEST syntax, as shown below.

sample print URL

I used that URL in a branch that fires when the "PRINT_REPORT" button is pressed:

While trying to debug this process you might run into the following problem. If you enable debugging from the Developer toolbar like this:

Enable debug from developer tool bar

or from the URL:

Enable debug from URL

the page 0 process won't be debugged. The output of the debug trace from the "View debug" option will be limited to the call of the button on page 1.

View Debug from Developer tool bar when debug level is enabled from the Developer Toolbar or using URL parameter

Same goes for the Session Activity view:

Session Details from Monitor Activity when debug level is enabled from the Developer Toolbar or using URL parameter

On the contrary, if you enable debug from Monitor Activity > Active Sessions > Session Details

Enable debug from Monitor Activity

debugging happens for page 0 processes as well. The "View debug" option shows the debug lines:

View Debug from Developer tool bar when debug level is enabled from Active Sessions in Activity monitor

Same information is displayed in the Session details:

Session Details from Monitor Activity when debug level is enabled from Active Sessions in Activity monitor

Oracle APEX bug or feature?

Not sure whether that is a bug or a feature, and I don't think it is relevant at this stage. The important bit here (and it applies to every aspect of the development cycle) is that we get to know different ways of reaching the same end, different methods to try, to keep us going forward :)

Categories: BI & Warehousing

Globally block a specific SQL Plan Directive?

Tom Kyte - Mon, 2022-12-05 00:26
I work with a team of DBAs to administer an Oracle-based COTS application suite for over forty client departments within our organization. This past summer we upgraded all client environments from Oracle 12c v1 to Oracle 19c, on entirely new and upgraded virtual hardware platforms. Our largest client department has > 20,000 active users. The COTS software package and its data model are primarily designed for transaction throughput and supporting client user browsing actions. It generates large volumes of dynamic SQL statements in response to user actions. For historical reasons, and to support older versions of Oracle, many of the dynamically generated queries include the /* BIND_AWARE */ directive. This has not caused problems in the past, but since the upgrade to Oracle 19c, these directives frequently cause the SQL optimizer to choose vastly inferior, inefficient execution plans. For example, the Optimizer may choose a FULL INDEX SCAN on a secondary index of a table containing 300 million rows. Executing the same query after removing the /* BIND_AWARE */ directive causes the Optimizer to access the same table using a UNIQUE scan of the Primary Key. This results in an execution time in milliseconds as opposed to several minutes (worst case) for the original dynamic query that includes the directive. Since these queries are generated many times per second during heavy usage periods, they have caused some serious performance problems, to the point of rendering the systems unusable. The COTS vendor has looked at the problem, along with recommendations we received from Oracle Support, but acknowledges that it will be a large effort to identify and remove those directives for clients running recent releases of Oracle. We have verified to our satisfaction that the directives are the cause of the problem because of the differences in the execution plans when they are present or not. This surprised our DBAs because they understood that the Oracle 19c Optimizer ignored directives, but that is apparently not the case. They have been able to work around the problem for some frequently occurring queries by applying SQL patches to force the Optimizer to ignore all directives, but a patch only applies to a single SQL statement / SQL ID. The problem is the large number of dynamically generated queries containing the /* BIND_AWARE */ directive, which results in an indefinite number of distinct SQL statements / SQL IDs. It's like Heracles fighting the Hydra - cut off one head and two more grow back. Is there any way in Oracle 19c to globally disable a specific SQL Plan Directive like /* BIND_AWARE */ ? We do not want to disable all directives because there are others related to specific admin tasks that should be enabled, but we want to be able to suppress any and all occurrences of /* BIND_AWARE */, at least for specific schemas. So far I have not been able to identify any such feature in 19c. Thanks, Patrick
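
(A sketch of the per-statement workaround described above, assuming the problem SQL_ID is known; 'abcd1234efgh5' and the patch name are placeholders. DBMS_SQLDIAG.CREATE_SQL_PATCH attaches the IGNORE_OPTIM_EMBEDDED_HINTS hint to that one statement, so the embedded BIND_AWARE hint is ignored. As the question says, this does not scale to an unbounded set of SQL IDs.)

declare
  l_patch_name varchar2(128);
begin
  l_patch_name := dbms_sqldiag.create_sql_patch(
                    sql_id      => 'abcd1234efgh5',           -- placeholder SQL_ID
                    hint_text   => 'IGNORE_OPTIM_EMBEDDED_HINTS',
                    name        => 'ignore_bind_aware_patch',
                    description => 'Ignore embedded hints for this one statement');
end;
/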
Categories: DBA Blogs

Invoice Annotation with Sparrow/Python

Andrejus Baranovski - Sun, 2022-12-04 07:52
I explain our Streamlit component for invoice/receipt document annotation and labeling. It can be used either to create new annotations or review and edit existing ones. With this component you can add new annotations directly on top of the document image. Existing annotations can be resized/moved and values/labels assigned. 

This component is part of Sparrow - our open-source solution for data extraction from invoices/receipts with ML.

 

GoldenGate PURGEOLDEXTRACTS

Michael Dinh - Sat, 2022-12-03 16:24

There was an issue where GoldenGate trail files consumed 95% of a 197G filesystem.

PURGEOLDEXTRACTS was changed from MINKEEPHOURS 48 to MINKEEPHOURS 20

The above configuration is relative, and with increased transaction volume the filesystem can still fill up.

Why not size the trail file accordingly and keep n trail files?

https://docs.oracle.com/en/middleware/goldengate/core/19.1/reference/purgeoldextracts-manager.html

PURGEOLDEXTRACTS for Manager
MINKEEPFILES n
Keeps at least n trail files, including the active file. The minimum is 1 and the maximum is 100. The default is 1.

The filesystem will never get full because Manager will do the clean-up.

This is how an extract trail file can be sized when it is added:
ADD EXTTRAIL /u09/dirdat/aa EXTRACT e_hawk, MEGABYTES 500

https://docs.oracle.com/en/cloud/paas/goldengate-cloud/gwuad/changing-size-trail-files.html

ALTER EXTTRAIL trail, EXTRACT group, MEGABYTES n
ALTER RMTTRAIL trail, EXTRACT group, MEGABYTES n

When implementing the above, it would be best if trail files reside on a separate mount from GoldenGate Home.

Would you rather have 20 redo logs at 100MB or 10 at 200MB?

There’s no right or wrong. It’s whether the shoe fits or not.

UKOUG Breakthrough 2022 : Day 2

Tim Hall - Fri, 2022-12-02 14:19

Day 2 started pretty much the same as day 1. I arrived late to avoid the traffic. The first session I went to was Martin Nash with “Oracle Databases in a Multicloud World”. I was a bit late to this session, but from what I saw it seemed the general view was “don’t be stupid, … Continue reading "UKOUG Breakthrough 2022 : Day 2"

The post UKOUG Breakthrough 2022 : Day 2 first appeared on The ORACLE-BASE Blog.

UKOUG Breakthrough 2022 : Day 1

Tim Hall - Fri, 2022-12-02 03:26

The evening before the conference the Oracle ACEs met up for some food at a curry place in the city centre. Thanks to the Oracle ACE Program for organising this! Earlier that day I presented my first face to face session in 3 years, and now it was time for my first social event in … Continue reading "UKOUG Breakthrough 2022 : Day 1"

The post UKOUG Breakthrough 2022 : Day 1 first appeared on The ORACLE-BASE Blog.

How to change Launch Configuration to Launch Template in AWS

Pakistan's First Oracle Blog - Thu, 2022-12-01 21:59

Here is a step-by-step guide on how to change a launch configuration to a launch template in AWS for an auto scaling group. It's actually quite simple and straightforward.

AWS sent out a notification this week to accounts that make use of Amazon EC2 Auto Scaling launch configurations: Amazon EC2 launch configurations will deprecate support for new instances. After December 31, 2022, no new Amazon Elastic Compute Cloud (Amazon EC2) instance types will be added to launch configurations. After this date, existing launch configurations will continue to work, but new EC2 instances will only be supported through launch templates.


In order to update the ASG, you need to follow the steps below:


1. Create a launch template, paste the user data scripts into it, and save it. Also, make sure that you are using the correct AMI ID.


2. Once the launch template is created, navigate to your auto scaling group and, in the details section of the ASG, click the "Edit" button in the launch configuration section. There you will get an option at the top, "Switch to Launch Template".


3. Then select your newly created launch template and save the changes

Here is the document to create launch template.

Here is the document to know how to replace a launch configuration with a launch template.

The existing instances will keep running; only new instances will be launched using the launch template. On the ASG console, in the instance management section, you can check whether an instance was launched using the launch template.

To test from the instance's perspective, for example whether the application is running or the instance is working properly, you can log in to the instance and verify the details. The ASG will not automatically launch an instance after switching it to a launch template; you would have to change the desired capacity to launch a new instance using the launch template.

Categories: DBA Blogs

tracking blocking sessions

Tom Kyte - Thu, 2022-12-01 04:46
hello sir, I want to see the blocking sessions that occur during the day so I can fix the sessions or SQLs that are causing others to lock up. There is no trace of the blocker SQLs in GV$ACTIVE_SESSION_HISTORY. The sql_id and top_level_sql_id fields identify the blocked SQLs, but not the blocker SQLs. How can I get them? thanks for your help.
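
(A sketch rather than a definitive answer, and it assumes Diagnostics Pack licensing for ASH: each waiter sample carries BLOCKING_SESSION / BLOCKING_SESSION_SERIAL# / BLOCKING_INST_ID, so the blocker's SQL has to be looked up from the blocker's own ASH samples at roughly the same time. A blocker that is idle at that moment has no ASH sample, which is often why its SQL cannot be seen.)

select  w.sample_time,
        w.inst_id          waiter_inst,
        w.session_id       waiter_sid,
        w.sql_id           waiter_sql_id,
        w.event,
        w.blocking_session blocker_sid,
        b.sql_id           blocker_sql_id
from    gv$active_session_history w
left join gv$active_session_history b
        on  b.session_id      = w.blocking_session
        and b.session_serial# = w.blocking_session_serial#
        and b.inst_id         = nvl(w.blocking_inst_id, w.inst_id)
        and b.sample_time between w.sample_time - interval '1' second
                              and w.sample_time + interval '1' second
where   w.blocking_session is not null
order by w.sample_time;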
Categories: DBA Blogs

Default privileges for new objects

Tom Kyte - Thu, 2022-12-01 04:46
Hi, is there a method to set default system privileges for all newly created objects in Oracle, such as tables, sequences, procedures, functions and triggers? For example, a select privilege automatically assigned to an Oracle user. How can this be made possible without having to write additional code beyond e.g. create table, sequence, function etc.? Best regards, Dackard.
Categories: DBA Blogs

Need to know if there is any provision for copying data in parallel from a main table to a table in a different schema

Tom Kyte - Thu, 2022-12-01 04:46
Hello Team, I would like to know whether there is any provision for copying data in parallel from a main table (in one schema) to a table in a different schema without hampering current traffic. We need an approach that does this in a fast way. Please suggest or guide us on how we can do this. Please note: we want to create a new schema which contains the same tables as the main schema. As soon as a SQL operation happens on a main schema table, the same should get copied to the new schema as well, in a fast way. We need this new schema for reporting purposes only, so we need to develop this approach. Please guide us on this as soon as possible. Thanks & Regards, Shital
Categories: DBA Blogs

PL/SQL question: how to write a query that accepts start and end booking dates and displays all hotel reservations between those dates

Tom Kyte - Thu, 2022-12-01 04:46
create table hotel_reservation with following fields. Booking id, booking start date booking end date room type, room rent write PL/SQL block to accept start and end booking date and display all hotel reservation between that date. *** In this question i am stuck in : *** create table hotel_reservation(booking_id number(20),booking_start_date date,booking_end_date date,room_type varchar2(50),room_rent number(20)); insert into hotel_reservation values(1, TO_DATE('10-05-2022', 'dd-mm-yyyy'), TO_DATE('12-05-2022', 'dd-mm-yyyy'), 'Double', 12000); insert into hotel_reservation values(2, TO_DATE('21-07-2022', 'dd-mm-yyyy'), TO_DATE('25-07-2022', 'dd-mm-yyyy'), 'Single', 5000); insert into hotel_reservation values(3, TO_DATE('01-02-2022', 'dd-mm-yyyy'), TO_DATE('02-02-2022', 'dd-mm-yyyy'), 'Luxury', 30000); insert into hotel_reservation values(4, TO_DATE('30-06-2022', 'dd-mm-yyyy'), TO_DATE('01-07-2022', 'dd-mm-yyyy'), 'Double', 10000); insert into hotel_reservation values(5, TO_DATE('15-10-2022', 'dd-mm-yyyy'), TO_DATE('15-10-2022', 'dd-mm-yyyy'), 'Quad', 11000); select * from hotel_reservation; DECLARE book_id hotel_reservation.booking_id%type; book_sdate hotel_reservation.booking_start_date%type := TO_DATE('10-05-2022', 'dd-mm-yyyy'); book_edate hotel_reservation.booking_end_date%type := TO_DATE('15-10-2022', 'dd-mm-yyyy'); r_type hotel_reservation.room_type%type; r_rent hotel_reservation.room_rent%type; BEGIN Select booking_id,booking_start_date,booking_end_date,room_type,room_rent INTO book_id, book_sdate, book_edate,r_type,r_rent FROM hotel_reservation WHERE booking_start_date = book_sdate and booking_end_date = book_edate; dbms_output.put_line('hotel_reservation ' || book_id || ' ' || book_sdate || ' ' || book_edate || ' ' || r_type || ' ' || r_rent); END; *** in the begin block of pl/sql ***
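
(A sketch of one way to finish the block, not part of the original question; the date literals are examples. A SELECT ... INTO can only return a single row, so a cursor FOR loop with a range predicate on the booking dates is a more natural fit.)

declare
  p_start date := to_date('01-01-2022', 'dd-mm-yyyy');
  p_end   date := to_date('31-12-2022', 'dd-mm-yyyy');
begin
  for r in (select booking_id, booking_start_date, booking_end_date, room_type, room_rent
            from   hotel_reservation
            where  booking_start_date >= p_start
            and    booking_end_date   <= p_end
            order by booking_start_date)
  loop
    dbms_output.put_line('hotel_reservation ' || r.booking_id || ' ' ||
                         r.booking_start_date || ' ' || r.booking_end_date || ' ' ||
                         r.room_type || ' ' || r.room_rent);
  end loop;
end;
/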
Categories: DBA Blogs

Split String with table function

Tom Kyte - Thu, 2022-12-01 04:46
Hi I use this table function for string split (a space is used as a separator) create or replace FUNCTION fmv_space_to_table(p_list IN VARCHAR2) RETURN fmv_test_type AS l_string VARCHAR2(32767) := p_list || ' '; l_comma_index PLS_INTEGER; l_index PLS_INTEGER := 1; l_tab fmv_test_type := fmv_test_type(); BEGIN LOOP l_comma_index := INSTR(l_string, ' ', l_index); EXIT WHEN l_comma_index = 0; l_tab.EXTEND; l_tab(l_tab.COUNT) := TRIM(SUBSTR(l_string, l_index, l_comma_index - l_index ) ); l_index := l_comma_index + 1; END LOOP; RETURN l_tab; END fmv_space_to_table; The table function works fine: select * from table( fmv_space_to_table( 'A2345 123456 7890 2344')) Output: A2345 123456 7890 2344 When table function "fmv_space_to_table" is used in a query with only number values, it works fine e.g. ... s.productnumber IN ( select * from table( fmv_space_to_table( '123456 7890')) ) When table function "fmv_space_to_table" is used in a query with letter + number values, it doesn't work e.g. ... s.productnumber IN ( select * from table( fmv_space_to_table( 'A2345')) ) Error: ORA-00904: "A2345": ungultige ID cause of error: my table function returns as output: A2345 but in this case is needed: 'A2345' Question: How can the table function "fmv_space_to_table" be extended so that it can also be used with letters + numbers Values?
Categories: DBA Blogs

ORA-00001 on merge, concurent transactions

Tom Kyte - Thu, 2022-12-01 04:46
When having two simultaneous merges into a table with a unique constraint in different sessions, the second one throws ORA-00001. Steps to reproduce:

create table tab1 (col1 number);
create unique index ind1 on tab1(col1);

-- 1st session:
merge into tab1 d
using (select 1 col1 from dual) s
on (s.col1 = d.col1)
when not matched then insert (col1) values (s.col1);

-- 2nd session:
merge into tab1 d
using (select 1 col1 from dual) s
on (s.col1 = d.col1)
when not matched then insert (col1) values (s.col1);
-- the 2nd session now hangs

-- 1st session:
commit;

-- 2nd session:
-- throws ORA-00001

As far as I know, this might be how Oracle has behaved since the MERGE statement was introduced, and it might be considered to work as expected. My objection is that MERGE behaves differently depending on whether the first session has committed before the second session starts its merge. The second session obviously knows it cannot just insert, so it hangs and waits for the first session to finish. And the second session is only blocked if it is working with the same key of the unique index, so it is clearly aware that there is an uncommitted session working on the same record. So when the first session commits the record to the table (resulting in an insert), the second session has already decided that it too should insert, and it violates the constraint. But why? Shouldn't the second session re-evaluate what the correct action is? If we tried this with [merge when matched], the second session hangs until the first one commits, but even though the constrained column is updated, there is no issue for the second session updating the same record. Does it make sense, or do you think "it works as intended" and it shouldn't be addressed?
Categories: DBA Blogs

Birmingham City University (BCU) : The changing role of the DBA

Tim Hall - Thu, 2022-12-01 03:35

Yesterday I took a trip across town to Birmingham City University (BCU) to do a talk to the students. The talk was called “The changing role of the DBA”. It’s been over 3 years since I’ve done a face-to-face presentation. I did some online presentations at the start of lockdown, but it’s been 2 years … Continue reading "Birmingham City University (BCU) : The changing role of the DBA"

The post Birmingham City University (BCU) : The changing role of the DBA first appeared on The ORACLE-BASE Blog.

AWS re:Invent day 3 – Gray failures mitigation

Yann Neuhaus - Wed, 2022-11-30 19:36

So again, during the 3rd day of re:Invent 2022 there was plenty of interesting information to take with us. However, the most interesting part was for sure the workshop about gray failures and how to mitigate them in a multi-AZ environment. Let me try to wrap this up for you…

What are gray failures?

No idea what “gray failures” stands for? Let us recall a situation we have surely all experienced, when users say “the application isn’t working! Can’t work…” while at the same time, on the infrastructure side, you hear “Nope, everything is green, no alerts there… must be the network ;-)”

That’s a gray failure. It means a non-binary situation where everything is working and not working at the same time: a kind of distortion between the application/user perception and the system one.

It is typically a conjunction of “small” issues that may not even have breached any monitoring alert thresholds, but that taken together make the application unusable. Some research even states that the biggest cloud outages come from these gray failures, as part of the so-called failure circle.

Using a multi-AZ architecture on AWS, you can indeed try to detect and mitigate these gray failures.

Let's take as an example a web-based application running on 3 availability zones, with a load balancer and front-end web servers (as part of an auto scaling group), and a database as the backend.

How to detect gray failures?

In the case of gray failures our traditional monitoring approach (i.e. CPU > 96%, latency > 60ms, …) isn’t enough. Again, the CPU may only be at 95%… but the latency may also have reached 59ms… and at the same time we face an unusual amount of retransmitted packets due to a “tired” switch.

Do you remember “Murphy’s Law”? Everything that can go wrong… goes wrong…

On AWS, CloudWatch can help you there.

Using AWS CloudWatch you will at first create your traditional alarms (CPU, latency, …), but also region availability and latency ones. In a second step, CloudWatch allows you to create so-called “composite alarms”, which are based exactly on this principle of combining several alarms/metrics using simple AND, OR and NOT expressions.

You can find much more information about detecting such gray failures in the AWS Advanced Multi-AZ Resilience Patterns whitepaper.

How to react?

Now that we have an idea on how we can detect these gray failure, the obvious questions is “how to react when it happens”.

The principle is pretty “easy”: evict the AZ that is facing issues from our multi-AZ architecture.

This is the same principle as a cluster evicting a failed node.

To do so we can act on 2 levels:

1. Stop sending requests to the load balancer in the affected AZ by deactivating this “route” in our Route53 DNS configuration

2. Stop the instances in the faulty AZ and make sure the auto scaling group will bootstrap new ones in the remaining AZs

Disabling the DNS in Route53

Here is the global idea: when the alarm is raised you set a simple value in a DynamoDB table stating that the corresponding AZ is unhealthy.

On Route53 you will have linked each DNS entry to a Health Check.

What does this health check, configured beforehand in Route53, actually do?
It calls Amazon API Gateway to make a REST request against the DynamoDB table and check the health state of the corresponding AZ.

If the REST API comes back stating that the AZ is not healthy anymore (by sending back a 4xx code), then Route53 won’t return this path until the AZ gets healthy again.

You can also have a look at the AWS article Creating Disaster Recovery Mechanisms Using Amazon Route 53.

Updating the auto scaling group

So now that our users aren’t sent to the faulty AZ anymore, the next step is to update the auto scaling group for the web front-end instances. This should automatically shut down the ones in the faulty AZ and start new ones in the remaining AZs.

To do so the idea is again to use a table in DynamoDB in which we will store the subnets of the faulty AZ.

Finally we simply run a Lambda function to update the auto scaling group and remove those subnets. As a consequence, the instances running in them will be shut down and new ones will be started in the remaining AZs.

The article AWS re:Invent day 3 – Gray failures mitigation appeared first on the dbi Blog.
