I recently came across an issue where the managed servers were failing with the error below. Although the servers start and reach RUNNING mode, the JMS queues do not show up in the admin console under JMS Servers -> Monitoring -> Active Destinations.
<Error> <JMS> <BEA-040123> <Failed to start JMS Server "JMS_DEV_SERVER1" due to weblogic.jms.common.JMSException: weblogic.messaging.kernel.KernelException: Ignoring 2PC record for sequence=4706 queue=57 because the element cannot be found.
weblogic.jms.common.JMSException: weblogic.messaging.kernel.KernelException: Ignoring 2PC record for sequence=4706 queue=57 because the element cannot be found
        at weblogic.jms.backend.BackEnd.open(BackEnd.java:1008)
        at weblogic.jms.deployer.BEAdminHandler.activate(BEAdminHandler.java:200)
        at weblogic.management.utils.GenericManagedService.activateDeployment(GenericManagedService.java:239)
        at weblogic.management.utils.GenericServiceManager.activateDeployment(GenericServiceManager.java:131)
        at weblogic.management.internal.DeploymentHandlerHome.invokeHandlers(DeploymentHandlerHome.java:632)
Truncated. see log file for complete stacktrace
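You can also verify the symptom without the admin console. The following is a minimal WLST sketch that lists each running JMS server's active destination count from the domain runtime tree; the admin URL and the credentials are placeholders for your environment.

# Minimal WLST sketch (run via $WL_HOME/common/bin/wlst.sh).
# The URL, username and password below are placeholders.
connect('weblogic', 'welcome1', 't3://adminhost:7001')
domainRuntime()
for serverRuntime in domainRuntimeService.getServerRuntimes():
    jmsRuntime = serverRuntime.getJMSRuntime()
    if jmsRuntime is not None:
        for jmsServer in jmsRuntime.getJMSServers():
            # A healthy JMS server reports its queues here; on the
            # affected servers this count stays at 0.
            print jmsServer.getName(), '->', jmsServer.getDestinationsCurrentCount(), 'active destinations'
disconnect()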
This issue was an after-effect of the JMS file persistence store hitting 100% space utilization. The store got corrupted as a result, which caused the JMS servers to fail while starting/activating.
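To catch this condition before the store gets corrupted, it is worth monitoring the filesystem that holds the file store. Here is a minimal sketch, assuming Python 3.3+ on the host; the store path is a hypothetical example, so substitute your domain's store directory.

# Minimal sketch, assuming Python 3.3+; STORE_DIR is a hypothetical
# example -- point it at your persistence file store location.
import shutil

STORE_DIR = '/u01/domains/dev_domain/servers/JMS_DEV_SERVER1/store'

usage = shutil.disk_usage(STORE_DIR)
pct_used = usage.used * 100.0 / usage.total
print('File store volume is %.1f%% full' % pct_used)
if pct_used >= 90:
    # Alert well before 100%, since a full volume can corrupt the store.
    print('WARNING: JMS file store volume is nearly full')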
To work around this error, go to the persistence file store location (local or shared SAN storage) and rename the existing *.DAT files, for example by appending a _bkp suffix (a scripted version of this step is sketched below). Then restart the managed servers; they should now come up without any issues/errors. Keep in mind that renaming the store files discards whatever persistent messages were in the corrupted store, as WebLogic creates a fresh, empty store file on restart.
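If you prefer to script the rename step, here is a minimal Python sketch. STORE_DIR is a hypothetical example path, and the _bkp suffix matches the convention above; run it while the managed servers are stopped.

# Minimal sketch; STORE_DIR is a hypothetical example path --
# point it at your persistence file store location.
import glob
import os

STORE_DIR = '/u01/domains/dev_domain/servers/JMS_DEV_SERVER1/store'

for dat_file in glob.glob(os.path.join(STORE_DIR, '*.DAT')):
    backup = dat_file + '_bkp'
    os.rename(dat_file, backup)
    print('Renamed %s -> %s' % (dat_file, backup))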
Oracle Metalink Note 1473826.1 "FileStore getting corrupted and WLS is unable to initialize JMS leading to BEA-040123" suggests applying a WebLogic patch for Bug 13900234. I haven't tried applying this patch, but if the issue persists in spite of the workaround, you can try this option as well.