
Posts

Perforce (p4) to Git migration

This process uses git-p4 as the import tool. Follow the steps below:

1. Install Python, a Perforce client, and Git Bash on your machine.

2. Download the python script (git-p4.py) from https://raw.githubusercontent.com/git/git/master/git-p4.py

3. Set the following environment variables:

   P4PORT=public.perforce.com:1666
   P4USER=testgitp4

4. Run the following command to import the P4 project, supplying the project's depot path on the Perforce server and the path into which you want to import it:

   $ python git-p4.py clone //depot/myproject@all /e/git/myproject
   Importing from //depot/myproject@all into /e/git/myproject
   Initialized empty Git repository in /private/tmp/p4import/.git/
   Import destination: refs/remotes/p4/master
   Importing revision 2153 (100%)

5. Change directory to /e/git/myproject

At this point you're almost done. If you run git log, you can see your imported work.

$ git log -2
commit 33ddd3a8c5c1eda6eace15be3sss3c318b32c39
Author: git p4 <g

ORA-28002: the password will expire within 7 days

Change the password, or follow the steps below to set the password life time to UNLIMITED.

Step 1: Identify the user's profile

SQL> SELECT profile FROM dba_users WHERE username = 'USER';

Step 2: View the profile settings

SQL> SELECT resource_name, resource_type, limit FROM dba_profiles WHERE profile = 'USER_PROFILE';

Step 3: Set PASSWORD_LIFE_TIME

SQL> ALTER PROFILE DEFAULT LIMIT PASSWORD_LIFE_TIME UNLIMITED;

Step 4: Re-enter the password

SQL> ALTER USER USER1 IDENTIFIED BY "password";
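Note that altering the DEFAULT profile changes the limit for every user assigned to it. If the user from Step 1 belongs to a different profile, you can alter that profile instead. A minimal sketch, assuming the profile name USER_PROFILE found in Step 1 (adjust to your environment), followed by a query to verify the new limit:

SQL> ALTER PROFILE USER_PROFILE LIMIT PASSWORD_LIFE_TIME UNLIMITED;
SQL> SELECT resource_name, limit FROM dba_profiles WHERE profile = 'USER_PROFILE' AND resource_name = 'PASSWORD_LIFE_TIME';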

JMS Consumer (onMessage()) delay in getting message from Oracle AQ

I have an application where I have implemented Oracle AQ. I ran into a behavior where the average processing time varied as depicted in the graph below. When the volume of orders was low, the average processing time was higher; as the load increased over time, the average processing time leveled off; and when the volume started declining, the processing time started increasing again. I analyzed the behavior and found that there is a delay in message consumption after a message has been produced to the AQ. On further analysis I found that AQjmsListenerWorker goes to sleep if no message is available for consumption, and the sleep time doubles (up to a peak limit) each time no message is found, thus optimizing resource utilization when there are no messages in the AQ to consume. After enabling diagnostic logging for the AQ API (-Doracle.jms.traceLevel=6), I observed that the listener thread's sleep time doubles up to 15000 ms (15 sec), starting from the default value of 1000 ms.

CLOB Argument In EXECUTE IMMEDIATE

The Oracle EXECUTE IMMEDIATE statement implements dynamic SQL in Oracle. Before Oracle 11g, EXECUTE IMMEDIATE accepted only character-string statements, which limited the length of the statement you could pass. Oracle 11g allows a CLOB to be passed as the argument, which removes the length constraint we faced when passing strings to EXECUTE IMMEDIATE. However, a PL/SQL block written on Oracle 11g with a CLOB argument to EXECUTE IMMEDIATE will not execute on Oracle 10g.
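As a minimal sketch (the statement here is trivial purely for illustration; in practice the CLOB would hold a statement longer than the 32,767-byte VARCHAR2 limit), an 11g block passing a CLOB to EXECUTE IMMEDIATE looks like this:

DECLARE
  l_sql CLOB;
BEGIN
  -- Build (or append to) the statement in a CLOB variable
  l_sql := 'BEGIN NULL; END;';
  EXECUTE IMMEDIATE l_sql;
END;
/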

Oracle 11g export using EXP utility missing some tables

Problem: I recently exported an 11g schema using the 11g EXP utility and tried to import it into another 11g instance using the 11g IMP utility. However, not all of the tables were transferred to the destination instance. On further debugging, I found that empty tables, i.e. tables with no rows (0 rows), did not get exported to the dump file and thus were missing.

Cause: This is due to the Oracle feature "Segment Creation on Demand" (deferred segment creation).

Solution:

1) Use the new Oracle Data Pump utilities (expdp/impdp) for the export and import instead of exp/imp.

2) Turn off the feature before creating any objects:

   ALTER SYSTEM SET DEFERRED_SEGMENT_CREATION=FALSE;

3) Force the allocation of an extent on each empty table using the following command (a query that generates these statements for all empty tables is sketched below), then re-run the EXP export, which will export the empty tables as well:

   ALTER TABLE <table_name> ALLOCATE EXTENT;
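For convenience, here is a minimal sketch that generates the ALLOCATE EXTENT statements for every table in the current schema that has no segment yet. It assumes an 11gR2 data dictionary, where USER_TABLES exposes the SEGMENT_CREATED column; review the generated statements before running them:

SQL> SELECT 'ALTER TABLE ' || table_name || ' ALLOCATE EXTENT;' FROM user_tables WHERE segment_created = 'NO';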