optimizer statistics
Subject: optimizer statistics
Author: Tso P, South Africa
Date: Sep 15, 2018, 18:23, 338 days ago
Os info: Red Hat Enterprise Linux Server release 5.11 (Tikanga)
Oracle info: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
Message: Hi Guys

I am running an import using Data Pump. This import normally takes about 6 hours, but now a single table alone has been running for about 7 hours.

We have not used this environment for about a year.

When I check the AWR report, I see the following:


Top 5 Timed Foreground Events

Event                       Waits  Time(s)  Avg wait (ms)  % DB time  Wait Class
DB CPU                               3,432                     96.10
db file sequential read    49,082       90              2       2.51  User I/O
Datapump dump file I/O      3,619       51             14       1.43  User I/O
log file switch completion     17        0              9       0.00  Configuration
log file sync                  88        0              1       0.00  Commit



Load Profile

                     Per Second  Per Transaction  Per Exec  Per Call
DB Time(s):                 1.0              0.0      0.00      5.41
DB CPU(s):                  1.0              0.0      0.00      5.20
Redo size:          2,136,455.2         33,028.6
Logical reads:        107,840.7          1,667.2
Block changes:         18,847.4            291.4
Physical reads:         1,317.6             20.4
Physical writes:          171.2              2.7
User calls:                 0.2              0.0
Parses:                    69.7              1.1
Hard parses:                0.0              0.0
W/A MB processed:     164,977.0          2,550.5
Logons:                     0.1              0.0
Executes:              30,634.1            473.6
Rollbacks:                  0.0              0.0
Transactions:              64.7


Is it OK to gather the optimizer statistics while the import is running?

Or is there anything else I should focus on, based on the AWR stats above?

Please help!!!

Thanks in advance...


Subject: Re: optimizer statistics
Author: Bruno Vroman, Belgium
Date: Sep 16, 2018, 10:39, 337 days ago
Score: 300 Pts
Message: Hello Tso,

- Oracle 11.1 :-( Hmmmmm

- what you show is not a busy system... Don't you see very high waits on I/O somewhere? The system is waiting on something; what?

- about statistics and impdp: by default, importing the statistics is part of the impdp session, so I don't think you should gather stats yourself concurrently. But this part is often incredibly slow, so I skip it every time (except when I forget). The best is to skip the statistics at expdp time (EXCLUDE=statistics), but if that was not done at expdp time it can still be done at impdp time (same parameter). Annoying detail: if the dmp file was generated with "exclude stats" and you specify "exclude stats" again at impdp time, this causes an error (fortunately at the beginning of the session) stating that there are no statistics to exclude.

- I am currently busy with an SR about slow expdp/impdp when using NFS filesystems (but this is about Oracle 12.1). Amazing facts (that we had not seen with previous hardware; maybe linked to the mount options of the NFS filesystems), for example: Data Pump with a local filesystem can be 150 times faster than with an NFS one... (exact same hardware and filesystem config; all OK when using Oracle 12.2). So: are you using a local filesystem or not?
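The EXCLUDE=statistics approach described above can be put in a Data Pump parameter file; a minimal sketch, in which the directory object, dump file name, and schema are placeholders to adapt to your environment:

```shell
# Hypothetical parameter file -- DIRECTORY, DUMPFILE and SCHEMAS are
# placeholders, not values from this thread.
cat > imp_nostats.par <<'EOF'
DIRECTORY=DATA_PUMP_DIR
DUMPFILE=expdp_full.dmp
LOGFILE=impdp_nostats.log
SCHEMAS=MYSCHEMA
EXCLUDE=STATISTICS
EOF

# Import without the (often very slow) statistics section; statistics
# can then be gathered afterwards, e.g. with DBMS_STATS.
impdp system parfile=imp_nostats.par
```

Note the caveat above: if the dump file was itself produced with EXCLUDE=STATISTICS, specifying it again at impdp time raises an error at the start of the session.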


Best regards,

Bruno Vroman.

Subject: Re: optimizer statistics
Author: Tso P, South Africa
Date: Sep 16, 2018, 18:58, 336 days ago
Message: Thanks Bruno

I can't seem to see what the system is waiting for, and I am not using a local filesystem.

Thanks for the reply...


Subject: Re: optimizer statistics
Author: Tso P, South Africa
Date: Sep 17, 2018, 00:39, 336 days ago
Score: 100 Pts
Message: I had an active trigger on a pre-existing table, and that table had about 100 million records.

I do have a script that disables the triggers before the import, but it did not work because I made a mistake in the WHERE clause.

I fixed the script, disabled the triggers, and re-ran the import, and now it is running fine.

Thanks for the reply...
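For reference, a common way to disable all triggers of a schema before such an import is to generate the ALTER statements from the data dictionary; a minimal sketch, where MYSCHEMA is a placeholder (and the WHERE clause is exactly the part that is easy to get wrong, as happened here):

```sql
-- Generate ALTER TRIGGER ... DISABLE statements for one schema.
-- MYSCHEMA is a placeholder; run the generated statements afterwards.
SELECT 'ALTER TRIGGER "' || owner || '"."' || trigger_name || '" DISABLE;'
  FROM dba_triggers
 WHERE owner = 'MYSCHEMA'
   AND status = 'ENABLED';
```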

Subject: Re: optimizer statistics
Author: Bruno Vroman, Belgium
Date: Sep 17, 2018, 08:57, 336 days ago
Score: 400 Pts
Message: Thank you for the feedback.
Good that you found the issue.
Unless you really need this (appending data to an existing table), it is usually advisable to drop the existing tables before the import instead of using an option like "truncate": in the latter case, every index has to be maintained for each and every row inserted, constraints have to be checked, and in your case the triggers fire (which normally causes problems, like registering the data a second time in an audit table).
Easier: drop the table; impdp will then create the table, and only AFTER loading the data will it create the indexes, constraints, and triggers.
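The difference described above corresponds to impdp's TABLE_EXISTS_ACTION parameter; a sketch of the two approaches, with placeholder object names:

```shell
# Option 1: TRUNCATE keeps the existing table, indexes, constraints and
# triggers in place, so every inserted row maintains the indexes and
# fires the triggers.
impdp system DIRECTORY=DATA_PUMP_DIR DUMPFILE=expdp_full.dmp \
      TABLES=MYSCHEMA.BIG_TABLE TABLE_EXISTS_ACTION=TRUNCATE

# Option 2: drop the table first (or use TABLE_EXISTS_ACTION=REPLACE);
# impdp then recreates the table, loads the data, and only afterwards
# creates the indexes, constraints and triggers.
#   SQL> DROP TABLE myschema.big_table PURGE;
impdp system DIRECTORY=DATA_PUMP_DIR DUMPFILE=expdp_full.dmp \
      TABLES=MYSCHEMA.BIG_TABLE
```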

Best regards,

Bruno

Subject: Re: optimizer statistics
Author: Tso P, South Africa
Date: Sep 19, 2018, 11:09, 334 days ago
Message: Thanks a lot for the reply.