A few little things after migrating to 19.9


Subject: A few little things after migrating to 19.9
Author: Joël Péran, France
Date: Dec 14, 2020, 08:23
Os info: Windows 2K16
Oracle info: Oracle 19.9
Message: Hi everyone,

I am facing a few strange behaviours after migrating to 19.9, and I have found nothing really helpful on the support site.

1 / After the migration (and even back in 12c) I have enabled the DRCP feature in order to get pooled connections for our databases, and the alert log began to grow: every 5 or 6 seconds a line is added with a TNS-12170 error. MOS says there is a patch (I can't find it) and that future versions are not impacted (I am on 19.9)... Has anybody encountered this? (A DRCP configuration sketch follows after this list.)
2 / Not all of the databases have been migrated yet. About 7 days after migration we are advised to run the fixed objects stats gathering process, so last Friday I ran dbms_stats.gather_fixed_objects_stats... and it is still running: no locked object, no message in the alert log, nothing... just the process still working. I have contacted Mike Dietrich about it and sent him the alert log and an AWR report (nothing in this report either). Still nothing... Have you already faced this issue? As soon as I get an answer, I will post it here in order to help other people as well. (A quick session check is sketched after this list.)
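For context, a minimal sketch of how DRCP is typically started and tuned from SQL*Plus; the pool name is the default one, but the size and timeout values below are illustrative assumptions, not the settings actually used on this system:

-- Start the default database resident connection pool (run as a DBA).
exec DBMS_CONNECTION_POOL.START_POOL();

-- Optionally tune the pool; all values here are illustrative only.
BEGIN
  DBMS_CONNECTION_POOL.CONFIGURE_POOL(
      pool_name          => 'SYS_DEFAULT_CONNECTION_POOL'
    , minsize            => 4
    , maxsize            => 40
    , inactivity_timeout => 300    -- seconds an idle pooled server is kept
    , max_think_time     => 120);  -- seconds a client may hold a server without a call
END;
/

-- Check the current pool status and settings.
select * from dba_cpool_info;

And a generic way to check on a long-running GATHER_FIXED_OBJECTS_STATS session; the filter on active user sessions is an assumption, adapt it to the actual SID:

-- What the active user sessions are currently waiting on.
select sid, serial#, sql_id, event, state, seconds_in_wait
  from v$session
 where status = 'ACTIVE' and type = 'USER';

-- Recent DBMS_STATS operations and their status.
select operation, target, start_time, end_time, status
  from dba_optstat_operations
 order by start_time desc;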

Regards,
Joel

Subject: Re: A few little things after migrating to 19.9
Author: Joël Péran, France
Date: Dec 17, 2020, 09:08
Score: 400 Pts
Message: Hi everybody,

Some news about point #2.
I have checked a Mike Dietrich post about that on his site: some fixed tables may not have their statistics gathered (around 14 tables in my case). So I took a script and adapted it to log some timing information while manually gathering statistics on these tables.

set serveroutput on;

declare
  f     utl_file.file_type;
  cpt   number;
  debut number;
  fin   number;
BEGIN
  cpt := 0;
  -- Loop over the fixed tables, skipping the ones known to cause trouble.
  for i in (select V$FIXED_TABLE.name, type
              from fixed_obj$, V$FIXED_TABLE
             where fixed_obj$.obj# = V$FIXED_TABLE.object_id
               and V$FIXED_TABLE.name not in ('X$KSMMEM', 'X$LOGMNR_CONTENTS', 'X$KZEKMFVW', 'X$JOXFT',
                                              'X$JOXFM', 'X$CDBVW$', 'X$COMVW$', 'X$OBLNK$')
           ) loop
    cpt := cpt + 1;
    -- Open/close the log file on every iteration (append mode) so that
    -- progress can be followed from the OS while the script is running.
    f := utl_file.fopen('RTSPV_LOG_DIRECTORY', 'calcul_stats_01.trc', 'a');
    debut := dbms_utility.get_time();
    utl_file.put_line(f, 'Gathering statistics for table #' || to_char(cpt) || ' : ' || i.name);
    SYS.DBMS_STATS.GATHER_TABLE_STATS(
        ownname          => 'SYS'   -- fixed X$ tables belong to SYS
      , TabName          => i.name
      , Estimate_Percent => SYS.DBMS_STATS.AUTO_SAMPLE_SIZE
      , Method_Opt       => 'FOR ALL COLUMNS SIZE AUTO'
      , Degree           => NULL
      , Cascade          => DBMS_STATS.AUTO_CASCADE
      , No_Invalidate    => DBMS_STATS.AUTO_INVALIDATE
      , Force            => FALSE);

    fin := dbms_utility.get_time();
    -- dbms_utility.get_time returns hundredths of a second.
    utl_file.put_line(f, 'Elapsed: ' || to_char((fin - debut) / 100) || ' s');
    utl_file.fflush(f);
    utl_file.fclose(f);
  end loop;
END;
/

In parallel, while running this in SQL*Plus, I was checking the output file: each time a table was processed, the file was updated. The script was taking a long time right after one particular table, so I ran a simple SQL query to see which table the script would process next and found that another table had to be added to the exception list: 'X$KJAC_ID'. Once I added this table to the list, the script ran fine and statistics were gathered on all the tables. This is still a bug, but at least we have a workaround. I will open an SR and add this to its comments.
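A minimal sketch of such a query, assuming the goal is simply to list the fixed tables that still have no statistics (the dictionary views are standard, but this is not necessarily the exact statement used here):

-- Fixed tables whose statistics have not been gathered yet;
-- the next table the loop will hit shows up in this list.
select t.name
  from v$fixed_table t
 where not exists (select 1
                     from dba_tab_statistics s
                    where s.owner = 'SYS'
                      and s.table_name = t.name
                      and s.last_analyzed is not null)
 order by t.name;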
There is a new autoupgrade version out now, but I don't think I will use it, as I am using self-written scripts based on the Oracle dbupgrade tool. My scripts also take care of the post-upgrade fixups, the time zone upgrade and the insertion into the cluster role via Fail Safe :) ... Better than autoupgrade after all :)

I hope this will help you!
Best regards,
Joel