
How To Start, Stop and Restart Oracle Listener
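The lsnrctl utility ships with the Oracle software, so ORACLE_HOME and PATH must point at your installation before you can run it. A minimal sketch, assuming the 11.1.0 installation path that appears in the status output below:

$ export ORACLE_HOME=/u01/app/oracle/product/11.1.0   # assumed install path
$ export PATH=$ORACLE_HOME/bin:$PATH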

Display Oracle Listener Status

To check whether the Oracle listener is running, use the lsnrctl status command. If the listener is not running, you'll see "TNS:no listener" errors like the following.

$ lsnrctl status

LSNRCTL for Linux: Version 11.1.0.6.0 - Production on 04-APR-2009 16:27:39

Copyright (c) 1991, 2007, Oracle. All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.2)(PORT=1521)))
TNS-12541: TNS:no listener
TNS-12560: TNS:protocol adapter error
TNS-00511: No listener
Linux Error: 111: Connection refused
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC)))
TNS-12541: TNS:no listener
TNS-12560: TNS:protocol adapter error
TNS-00511: No listener
Linux Error: 2: No such file or directory
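You can also confirm at the OS level that no listener is running. The listener process is named tnslsnr, so a quick check on Linux looks like this:

$ ps -ef | grep tnslsnr | grep -v grep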



If the Oracle listener is running, you'll see status output like the following.

$ lsnrctl status

LSNRCTL for Linux: Version 11.1.0.6.0 - Production on 04-APR-2009 16:27:02

Copyright (c) 1991, 2007, Oracle. All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.2)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 11.1.0.6.0 - Production
Start Date                29-MAR-2009 18:43:13
Uptime                    6 days 21 hr. 43 min. 49 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/product/11.1.0/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/devdb/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.1.2)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC)))
Services Summary...
Service "devdb" has 1 instance(s).
Instance "devdb", status UNKNOWN, has 1 handler(s) for this service...
Service "devdb.thegeekstuff.com" has 1 instance(s).
Instance "devdb", status READY, has 1 handler(s) for this service...
Service "devdbXDB.thegeekstuff.com" has 1 instance(s).
Instance "devdb", status READY, has 1 handler(s) for this service...
Service "devdb_XPT.thegeekstuff.com" has 1 instance(s).
Instance "devdb", status READY, has 1 handler(s) for this service...
The command completed successfully
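The services summary above only counts handlers per service. If you need per-handler detail (handler state, dispatcher information, connection counts), lsnrctl also provides a services command:

$ lsnrctl services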




Start Oracle Listener

If the listener is not running, start it as the Oracle software owner:

$ lsnrctl start
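All lsnrctl commands operate on the default listener, named LISTENER, unless you pass a listener name explicitly. If your listener.ora defines one under a different name, append that name to the command; LISTENER_NAME below is a hypothetical placeholder:

$ lsnrctl start LISTENER_NAME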


Stop Oracle Listener

Stopping the listener prevents new connections from being established, but it does not affect sessions that are already connected:

$ lsnrctl stop

Restart Oracle Listener

The reload command makes the listener reread listener.ora, so configuration changes take effect without stopping the listener or dropping established connections:

$ lsnrctl reload
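Note that reload is not a full restart; if the listener process itself is misbehaving, stop and start it explicitly:

$ lsnrctl stop
$ lsnrctl start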
