Wednesday, December 23, 2009

How WLS Serves a Request

1. A client contacts the ListenThread, the entry point into WebLogic Server, which accepts the connection. It then registers the socket with a WLS component known as the SocketMuxer for further processing.

2. The SocketMuxer is responsible for reading and dispatching all client requests to the proper WLS container. It adds this socket to an internal data structure for processing and asks an ExecuteThreadManager to create a new SocketReaderRequest. This request is then dispatched by the manager to an ExecuteThread.

3. As a result, the ExecuteThread becomes a SocketReader thread - it continually runs the SocketMuxer’s processSockets() method and checks the muxer’s queue to determine whether there is work to be done. If an entry exists, it pulls it off the queue and processes it.

4. The SocketReader thread reads the client request and determines its protocol type so that a new protocol-specific MuxableSocket can be created.

5. The MuxableSocketDiscriminator stores a new MuxableSocket with the implementation matching the protocol of the client. It also returns true to the SocketReader to notify it that the message is complete and can be dispatched.

6. MuxableSocketDiscriminator re-registers the protocol specific version of the MuxableSocket that was created earlier. The net result is that “Step 2” is repeated, and a new protocol specific MuxableSocket is placed in the SocketMuxer’s queue for processing.

7. A socket reader will get the new protocol-specific MuxableSocket off the queue and read it. It will then check to see if the message is complete, based on the protocol. If it is, it will invoke the protocol-specific MuxableSocketDiscriminator.

8. Before the work requested by the client can be performed, there may be many iterations of “step 7”. This is determined by the protocol – for example, t3 will read a portion of the message and dispatch it so that the subsystem can act upon the portion of the protocol read thus far.

9. The subsystem will create an ExecuteRequest and send it to an ExecuteThreadManager for processing. The request is dispatched to an ExecuteThread, and the result is returned to the client.


From a high-level perspective, the SocketMuxer can be explained as follows. Each and every socket connection that comes into WebLogic Server is “registered” with the SocketMuxer - which then maintains a list of these connections, each represented by a form of the MuxableSocket interface. It then becomes the responsibility of the SocketMuxer to read and dispatch each client request to the appropriate subsystem. This is a fairly elaborate process, which is illustrated by steps 2 through 8 above.

There are only a few key things to know about the SocketMuxer:

First, it has a data structure in which it stores a socket entry for each client connected to WebLogic Server.
Second, a “socket reader” is the main component of the SocketMuxer - which really is just an execute thread that is running the SocketMuxer’s processSockets() method.

Third, the SocketMuxer does most of its work through the only interface it knows how to operate on – the MuxableSocket interface.

Socket Reader:

A SocketReaderRequest is merely an implementation of ExecuteRequest, which is sent to the ExecuteThreadManager by the invocation of registerSocket(). When the ExecuteThread invokes the execute() method of the SocketReaderRequest, the SocketMuxer’s processSockets() method is invoked.
So, a socket reader thread is simply a normal execute thread which runs the main processing method of the SocketMuxer, processSockets().
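The relationship can be pictured with the conceptual sketch below. This is not actual WebLogic source; the interface and class shapes are assumptions made only to illustrate how an execute thread ends up running processSockets():

// Conceptual sketch only - the real WebLogic classes are internal and differ in detail.
interface ExecuteRequest {
    void execute(Thread executeThread) throws Exception;
}

class SocketMuxer {
    void processSockets() {
        // loop forever: poll registered sockets, read available data,
        // and dispatch complete messages to the appropriate subsystem
    }
}

class SocketReaderRequest implements ExecuteRequest {
    private final SocketMuxer muxer;

    SocketReaderRequest(SocketMuxer muxer) {
        this.muxer = muxer;
    }

    public void execute(Thread executeThread) throws Exception {
        // The execute thread that picks up this request becomes a "socket reader":
        // it simply runs the muxer's main processing loop.
        muxer.processSockets();
    }
}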


The acceptBacklog parameter of Weblogic server is passed to ServerSocket. The value of acceptBacklog means "The maximum queue length for incoming connection indications (a request to connect) is set to the backlog parameter. If a connection indication arrives when the queue is full, the connection is refused. "

Thus, if too many connection attempts arrive at the server at the same time, the server queues them and processes them one at a time. The value does not mean that only that many clients can connect to the server.

It does not limit the number of connections made; it limits the number of pending connections that can wait in the backlog queue. For example, suppose acceptBacklog is 2, hundreds of connections are made to the server, and the server has one thread to accept new connections.

This thread accepts the new connection, dispatches it to a new thread, and then goes back to listening for new connections. Sample code:

ServerSocket serversocket = new ServerSocket(7001, 2); // listen port 7001, acceptBacklog of 2

while (true) {
    Socket sock = serversocket.accept();  // Line 1: take the next pending connection off the backlog queue
    new MyThread(sock).start();           // Line 2: hand the socket off to a new thread
}

Here the thread accepts a new connection at Line 1, dispatches it to a new thread at Line 2, evaluates the while expression, and goes back to Line 1. In the time it takes to get back to Line 1 (say T1), many new connection requests may be made by clients. These new connections sit in the accept backlog queue, whose length is controlled by the acceptBacklog parameter.

If the queue length is 2 and several hundred connections are made to the server during T1, only 2 would be queued (and later accepted) and the rest refused. For rejections to occur there must be many simultaneous requests to the server; if requests do not arrive simultaneously, the chances of the queue filling up are low.

Note: Thanks to Sreedevi for helping me to provide this valuable information.

Saturday, December 19, 2009

How do we implement a daemon (a program running all the time, like a scheduled program running at a fixed rate) inside Weblogic?

The best approach is to create an empty servlet, let's call it StartupServlet, that has the standard load-on-startup option in web.xml set to a non-negative integer value. Place your code in the implementation of the servlet's "init" method. It's guaranteed that this method will be called on server startup, that it will be called once, and that by that time all the server's services, including EJB, will be up.

Example:
import javax.servlet.ServletConfig;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;

public class StartupServlet extends HttpServlet {

    public void init(ServletConfig servletConfig) throws ServletException {
        super.init(servletConfig);
        // ServiceManager is the application's own class that starts the background work
        ServiceManager serviceManager = ServiceManager.getInstance();
        serviceManager.startServices();
    }

    public void destroy() {
        ServiceManager serviceManager = ServiceManager.getInstance();
        serviceManager.stopServices();
        super.destroy();
    }
}


web.xml
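The deployment descriptor entry would look roughly like the following minimal sketch (the servlet-class assumes the default package used in the example; adjust to your own package):

<servlet>
    <servlet-name>StartupServlet</servlet-name>
    <servlet-class>StartupServlet</servlet-class>
    <load-on-startup>1</load-on-startup>
</servlet>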




Tweaking JVM parameters for admin and managed server

Suppose we want to give different memory arguments (typically -Xms and -Xmx) to the Admin server and a Managed server when both are on the same box.

We cannot hard-code the arguments in the startManagedWeblogic.cmd script, since it calls startWeblogic.cmd, which in turn calls the setDomainEnv script.

If we set the parameters in the setDomainEnv script, they are applied to both servers (Admin and Managed).

Solution: To tweak the JVM parameters, create the scripts as follows, for example:

startAdminWithCustomMemArgs.cmd that does the following:

set USER_MEM_ARGS=-Xms256m -Xmx512m
startWeblogic.cmd

And similarly for a managed instance:

startMgdWithCustomMemArgs.cmd:

set USER_MEM_ARGS=-Xms512m -Xmx1024m
startManagedWeblogic.cmd MS1

In the second script above “MS1” is the name of the managed server.

Wednesday, December 16, 2009

Weblogic FAQs

I want to share some common questions related to Weblogic Server which would be useful to any Weblogic professional.

Cluster Related Questions:

Q. What is a weblogic cluster?
Ans: A Weblogic cluster is a logical group of Weblogic Server instances working together in coordination. A cluster appears to clients, whether a web client or a Java application, to be a single WebLogic Server instance.

Q. What is load balancing?
Ans: Distributing the load evenly across the servers so that the application is served smoothly.
Algorithms used for load balancing are: round robin, weight based, and random.
Server-affinity variants of these algorithms also exist (round-robin-affinity, weight-based-affinity, and random-affinity).

Q. How does a server know when another server is unavailable?
Ans: WebLogic Server uses two mechanisms to determine if a given server instance is unavailable. Each WebLogic Server instance in a cluster uses multicast to broadcast regular “heartbeat” messages that advertise its availability. By monitoring heartbeat messages, server instances in a cluster determine when a server instance has failed. The other server instances will drop a server instance from the cluster if they miss three consecutive heartbeats from it.
WebLogic Server also monitors socket errors to determine the availability of a server instance.
For example, if server instance A has an open socket to server instance B, and the socket unexpectedly closes, server A assumes that server B is offline.

Q. How do clients learn about new WebLogic Server instances?
Ans: Once a client has done a JNDI lookup and begins using an object reference, it finds out about new server instances only after the cluster-aware stub has updated its list of available servers.

Q. How do stubs work in a WebLogic Server cluster?
Ans: Clients that connect to a WebLogic Server cluster and look up a clustered object obtain a replica-aware stub for the object. This stub contains the list of available server instances that host implementations of the object. The stub also contains the load balancing logic for distributing the load among its host servers.
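As a rough illustration, a Java client typically obtains such a stub through a JNDI lookup against the cluster address; the JNDI name and host names below are assumptions for the example:

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.InitialContext;

public class ClusterLookupClient {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
        // Cluster address: a comma-separated list of the managed servers (hypothetical hosts/ports)
        env.put(Context.PROVIDER_URL, "t3://server1:7001,server2:7001");

        Context ctx = new InitialContext(env);
        // The stub returned for a clustered object is replica-aware: it knows all hosting
        // servers and applies the configured load-balancing policy on each call.
        Object stub = ctx.lookup("myapp/MyClusteredServiceHome"); // JNDI name is an assumption
        System.out.println("Looked up replica-aware stub: " + stub.getClass().getName());
    }
}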

Q. What is multicast test?
Ans: Multicast is a network paradigm in which machines communicate with one another in groups; the multicast test utility helps you debug multicast problems when configuring a Weblogic cluster. The utility sends out multicast packets and returns information about how effectively multicast is working on your network.
As a general recommendation, do not perform the multicast test while the Managed servers are running.

This is because the Managed servers in the cluster communicate with one another to share real data, and running the multicast test at the same time creates confusion between the real data and the test data.

If your environment is in production and you cannot bring the Managed servers down, you can change the multicast address used for the test and then run it to see the results.

You can also run some networking diagnostics to make sure there are no hardware issues affecting communication among the cluster nodes.

At a minimum, the ping utility can be used to verify that network communication is unhindered (in both directions) between each of the server addresses.
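For reference, the test is run with the utils.MulticastTest class shipped with WebLogic; the exact flags can vary by release, so treat this as a sketch (the server name, multicast address, and port below are illustrative):

java utils.MulticastTest -n server1 -a 239.192.0.11 -p 7001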

Q. Which services cannot be clustered?
Ans: File services and time services. They cannot be deployed homogeneously throughout a cluster, and they do not benefit from any of the clustering services.

Q. Which protocol is used for multicast?
Ans: UDP - User Datagram Protocol

Q. What is the range of multicast address assignments?
Ans: The multicast addresses are in the range 224.0.0.0 through 239.255.255.255.

Q. What is difference between unicast and multicast?
Ans: Weblogic server provides an alternative to using multicast to handle cluster messaging and communications. Unicast configuration is much easier because it does not require the cross-network configuration that multicast does.
Additionally, it reduces potential network errors that can occur from multicast address conflicts. Multicast is communication between a single sender and multiple receivers on a network. Typical uses include the updating of mobile personnel from a home office and the periodic issuance of online newsletters.
Multicast enables a single device to communicate with a specific set of hosts, not defined by any standard IP address and mask combination. This allows for communication that resembles a conference call. Anyone from anywhere can join the conference, and everyone at the conference hears what the speaker has to say. The speaker's message isn't broadcast everywhere, but only to those in the conference call itself. A special set of addresses is used for multicast communication.
Unicast is communication between a single sender and a single receiver over a network. Unicast servers provide a stream to a single user at a time, while multicast servers can support a larger audience by serving content simultaneously to multiple users.
Multicasting exists as UDP multicasting, whereas non-multicast UDP (and TCP) messages are called unicast. The next thing to know is that multicast will often not be sent over a router to another network.
The TTL is low for most multicast packets. TTL on IP packets refers to the maximum number of network hops that a packet can make to get to its destination. Unicast packets typically are allowed to cross about 30 networks.
The above covers the basic differences between multicast and unicast communication. Unicast is typically faster because it is point-to-point communication between servers. Both multicast and unicast have their own advantages and disadvantages.
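As a minimal sketch of the difference in practice (the group address and port are illustrative), a multicast receiver joins a group address that any sender can target, whereas a unicast receiver binds to a single host and port:

import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.MulticastSocket;

public class MulticastReceiverSketch {
    public static void main(String[] args) throws Exception {
        InetAddress group = InetAddress.getByName("239.192.0.11"); // illustrative group address
        MulticastSocket socket = new MulticastSocket(7001);        // illustrative port
        socket.joinGroup(group);  // every member of the group receives each datagram sent to it

        byte[] buffer = new byte[1024];
        DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
        socket.receive(packet);   // blocks until a multicast datagram arrives
        System.out.println("Received: " + new String(packet.getData(), 0, packet.getLength()));

        socket.leaveGroup(group);
        socket.close();
    }
}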

Thread Related Questions:

Q. How are new threads assigned?
Ans: As work enters a Weblogic Server, it is placed in an execute queue. This work is then assigned to a thread within the queue that performs the work.
By default, a new server instance is configured with a default execute queue, weblogic.kernel.default, that contains 15 threads (in development) and 25 threads (in production).
In addition, Weblogic Server provides two other pre-configured queues:
weblogic.admin.HTTP—Available only on Administration Servers, this queue is reserved for communicating with the Administration Console; you cannot reconfigure it.
weblogic.admin.RMI—Both Administration Servers and Managed Servers have this queue; it is reserved for administrative traffic; you cannot reconfigure it.
Unless you configure additional execute queues, and assign applications to them, web applications and RMI objects use weblogic.kernel.default.

Q. Is there an ability to configure the container to kill stuck threads after X amount of time?
Ans: Currently, Weblogic reports that a thread is stuck after X amount of time, but the thread keeps running.
Once the thread is stuck or processing it cannot be killed.
If a thread is not stuck or processing a request, it is an idle thread waiting for a request (the waitforrequest() method: a thread invokes this method when it is waiting for work to be assigned to it by its manager).

Q. Is there a way to manually kill a stuck thread?
Ans: No. Once a thread is created, it is reused but cannot be killed.

Q. Why does the user “WLS Kernel” own so many threads?
Ans: WLS Kernel owns only the number of threads that are configured by default.

Q. What is “WLS Kernel” doing with these threads?
Ans: As described earlier, incoming work is kept in the default execute queue and then assigned to a thread within the queue, which performs the work.

Q. Can we make these threads available to the application?
Ans: No, we cannot do that.


Miscellaneous Questions:
Q. App server, Web server: What's the difference?
Ans:
The Web server
A Web server handles the HTTP protocol. When the Web server receives an HTTP request, it responds with an HTTP response, such as sending back an HTML page. To process a request, a Web server may respond with a static HTML page or image, send a redirect, or delegate the dynamic response generation to some other program such as CGI scripts, JSPs (JavaServer Pages), servlets, ASPs (Active Server Pages), server-side JavaScripts, or some other server-side technology. Whatever their purpose, such server-side programs generate a response, most often in HTML, for viewing in a Web browser.
When a request comes into the Web server, the Web server simply passes the request to the program best able to handle it. The Web server doesn't provide any functionality beyond simply providing an environment in which the server-side program can execute and pass back the generated responses. The server-side program usually provides for itself such functions as transaction processing, database connectivity, and messaging.
While a Web server may not itself support transactions or database connection pooling, it may employ various strategies for fault tolerance and scalability such as load balancing, caching, and clustering — features oftentimes erroneously assigned as features reserved only for application servers.
The application server
An application server exposes business logic to client applications through various protocols, possibly including HTTP. While a Web server mainly deals with sending HTML for display in a Web browser, an application server provides access to business logic for use by client application programs. The application program can use this logic just as it would call a method on an object (or a function in the procedural world).
Such application server clients can include GUIs (graphical user interface) running on a PC, a Web server, or even other application servers. The information traveling back and forth between an application server and its client is not restricted to simple display markup. Instead, the information is program logic. Since the logic takes the form of data and method calls and not static HTML, the client can employ the exposed business logic however it wants.
In most cases, the server exposes this business logic through a component API, such as the EJB (Enterprise JavaBean) component model found on J2EE (Java 2 Platform, Enterprise Edition) application servers. Moreover, the application server manages its own resources. Such gate-keeping duties include security, transaction processing, resource pooling, and messaging. Like a Web server, an application server may also employ various scalability and fault-tolerance techniques.

Q. What are tlog files?
Tlog files are the files used by weblogic to keep a trace of current transactions.
For example, when an instance is restarted, the tlog files are used to perform the second step of a two-phase commit on a transaction that was in progress. Sometimes, when a transaction is corrupted, you've got to delete these files to get rid of the phantom transaction. There is no need to restart a server to have them recreated.

Q. Why do I get “NoClassDefFound”/“Too Many Open files” messages on Solaris?
Problem: When I am using WebLogic Server on Solaris and try to run my application, I get a “NoClassDefFound” error, although the class causing the error does exist and is in the right directory. In fact, there are other classes in the same directory that are getting loaded. I also get a “Too many open files” error.
Ans: On Solaris, each user account has a limited number of file descriptors. You can find out how many file descriptors you have with the limit command in csh (or ulimit -n in sh/ksh).
You can increase the number of file descriptors with the same commands if you have sufficient privileges. Otherwise, ask your system administrator to increase the file descriptors available to your processes.

Q. How can I speed up connection responses?
Ans: Connection delays are often caused by DNS problems. WebLogic performs a reverse lookup on the hostname from which a new connection is made. If the DNS reverse lookup is not working properly because the connection is coming from a proxy server, it could be responsible for the delay. You might want to work with your system administrator to determine whether DNS and the third-party networking software are working properly. Try writing a simple server program that performs a reverse lookup on any connection made to it. If that lookup is delayed, then you know that the proxy server is the source of the problem.
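A minimal sketch of such a test program is shown below (the port is arbitrary); it accepts connections and times the reverse lookup for each one:

import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class ReverseLookupTest {
    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(9999); // arbitrary test port
        while (true) {
            Socket client = server.accept();
            InetAddress addr = client.getInetAddress();
            long start = System.currentTimeMillis();
            String host = addr.getHostName();         // triggers the reverse DNS lookup
            long elapsed = System.currentTimeMillis() - start;
            System.out.println(addr.getHostAddress() + " -> " + host + " (" + elapsed + " ms)");
            client.close();
        }
    }
}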

Q. Differences between Sun JVM and JRockit
Ans: Users cannot explicitly set the PermSize: The memory JRockit uses can conceptually be divided into only two categories: heap and native memory. Unlike Sun JVM,
JRockit has no special memory areas (like MaxPermSize for “permanent generation” allocations). Java objects are created on the heap, and all other memory allocations are done in native memory (which is considered non-heap memory).
Heap + Native memory is approximately = JVM process size.
JRockit uses more native memory than Sun’s JVM, meaning it usually will not support equivalent heap sizes.
JRockit doesn’t have:
- java code interpretation mode, only compilation mode
(Java code compiled into byte code itself compiled into platform specific machine code)
- the notion of a permanent generation set via MaxPermSize. This part of the memory is unbounded in native memory (outside the Java heap) and grows as needed
JRockit has:
- More diagnostic tools than other JVMs such as JRA
- Management console with no overhead on performance
- Better performance on Intel architectures than other VMs
- Higher memory usage for better performance
Advantages of JRockit:
What gives it the performance advantage?
Sun’s HotSpot JVM interprets all byte-code and compiles the hotspots (frequently called methods).
- This is its default mode of execution (-Xmixed)
- (-Xint) has to be specified to disable the HotSpot compiler and run in “interpreted-only” mode.
JRockit compiles all byte-code and optimizes the hotspots.
- The compilation gives JRockit its performance advantage and cannot be disabled. All methods that JRockit encounters for the first time in a run are converted into an internal representation of code (“code generation”) which is then compiled.
- The compilation is also what causes JRockit to have a larger footprint, since it uses native memory for code generation, generated code, optimization, garbage collection, thread allocation, etc. JSPs are converted into Java code, which is then compiled. JSPs are generally complex and produce large Java code, so they tend to be especially high consumers of native memory. An application with lots of JSPs can cause JRockit to exhaust the native memory and swamp the virtual process space.
- The optimization can be turned off by specifying the command-line option “-Xnoopt”. Historically, optimization has not been entirely reliable and can cause crashes and performance problems, so, if the customer is willing, -Xnoopt is a standard workaround that is worth trying.

Q. What is the function of T3 in WebLogic Server?
Ans: T3 provides a framework for WebLogic Server messages that supports enhancements.
These enhancements include abbreviations and features, such as object replacement, that work in the context of WebLogic Server clusters and HTTP and other product tunneling.
T3 predates Java Object Serialization and RMI, while closely tracking and leveraging these specifications. T3 is a superset of Java Object Serialization and RMI; anything you can do in Java Object Serialization and RMI can be done over T3.
T3 is mandated between WebLogic Servers and between programmatic clients and a WebLogic Server cluster. HTTP and IIOP are optional protocols that can be used to communicate between other processes and WebLogic Server. Which one to use depends on what you want to do. For example, to communicate between:
- a browser and WebLogic Server, use HTTP
- an ORB and WebLogic Server, use IIOP


Q. Can I refresh static components of a deployed application without having to redeploy the entire application?
A. Yes. You can use weblogic.Deployer to specify a component and target a server, using the following syntax:
java weblogic.Deployer -adminurl http://admin:7001 -name appname -targets server1,server2 -deploy jsps/*.jsp

Q. How do XA and non-XA drivers differ in distributed transactions?
Ans: The differences between XA and non-XA JDBC drivers are:
Atomicity Guarantee. An XA driver implements the XAResource interface and can participate fully in the 2PC protocol driven by the WLS Transaction Manager. This guarantees atomicity of updates across multiple participating resources.
However, a non-XA driver does not implement the XAResource interface and cannot fully participate in the 2PC protocol. When using a non-XA driver in a distributed transaction, WLS implements the XAResource wrapper on behalf of the non-XA driver. If the data source property enableTwoPhaseCommit is set to true, then the WLS XAResource wrapper returns XA_OK when the Transaction Manager invokes the prepare() method. When the Transaction Manager invokes commit() or rollback() during the second phase, the WLS XAResource wrapper delegates the commit() or rollback() call to the non-XA JDBC connection. Any failure during commit() or rollback() results in heuristic exceptions. Application data may be left in an inconsistent state as a result of heuristic failure.
Redirecting Connections. A non-XA driver can be configured to perform updates in the same distributed transaction from more than one process. WLS internally redirects the JDBC calls made from different processes to the same physical JDBC connection in one process. However, when you use a XA driver, no such redirection will be done. Each process will use its own local XA database connection, and the database ensures that all the distributed updates made in the same distributed transaction from different processes will be committed atomically.
Connection Management. Whether you are using the non-XA driver or XA driver in distributed transactions, WLS implements JDBC wrappers that intercept all the JDBC calls and obtains a physical JDBC connection from the connection pool on demand.
When you use a non-XA driver in distributed transactions, in order to ensure that updates made from different processes are committed atomically, WLS associates the same physical JDBC connection with the distributed transaction until it is committed or rolled back. As a result, the number of active distributed transactions using the non-XA connection pool is limited by the maximum capacity of the JDBC connection pool.
When you use an XA driver, the connection management is more scalable. WLS does not hold on to the same physical XA connection until the transaction is committed or rolled back. In fact, in most cases, the XA connection is only held for the duration of a method invocation. WLS JDBC wrappers intercept all JDBC calls and enlist the XAResource associated with the XA connection on demand. When the method invocation returns to the caller, or when it makes another call to another server, WLS delists the XAResource associated with the XA connection.
WLS also returns the XA connection to the connection pool on delistment if there are no open result sets. Also, during commit processing, any XAResource object can be used to commit any number of distributed transactions in parallel. As a result, neither the number of active distributed transactions using the XA connection pool nor the number of concurrent commit/rollbacks is limited by the maximum capacity of the connection pool. Only the number of concurrent database access connections is limited by the maximum capacity of the connection pool.
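As a rough sketch of what using datasources inside a distributed transaction looks like from the application's side (the datasource JNDI names, table names, and values below are assumptions for the example), a component obtains the UserTransaction from JNDI, updates two resources, and commits:

import java.sql.Connection;
import java.sql.PreparedStatement;
import javax.naming.InitialContext;
import javax.sql.DataSource;
import javax.transaction.UserTransaction;

public class XaTransferSketch {
    public void transfer() throws Exception {
        InitialContext ctx = new InitialContext();
        // Standard JNDI name under which WebLogic binds the UserTransaction
        UserTransaction tx = (UserTransaction) ctx.lookup("javax.transaction.UserTransaction");
        // Datasource JNDI names are illustrative; both would be XA-capable TxDataSources
        DataSource ordersDs = (DataSource) ctx.lookup("jdbc/OrdersXADataSource");
        DataSource auditDs  = (DataSource) ctx.lookup("jdbc/AuditXADataSource");

        tx.begin();
        try {
            Connection c1 = ordersDs.getConnection();
            PreparedStatement ps1 = c1.prepareStatement("UPDATE orders SET status = 'SHIPPED' WHERE id = 42");
            ps1.executeUpdate();
            ps1.close();
            c1.close();

            Connection c2 = auditDs.getConnection();
            PreparedStatement ps2 = c2.prepareStatement("INSERT INTO audit_log(msg) VALUES ('order 42 shipped')");
            ps2.executeUpdate();
            ps2.close();
            c2.close();

            tx.commit();   // with XA drivers, both resources commit atomically via 2PC
        } catch (Exception e) {
            tx.rollback();
            throw e;
        }
    }
}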

Q. Can I use more than one non-XA connection pool in distributed transactions?
A. No. Even if you set EnableTwoPhaseCommit=true for both TxDataSources of the connection pools, attempting to use two non-XA connection pools in the same distributed transaction will result in:
"java.sql.SQLException: Connection has already been created in this tx context for pool named. Illegal attempt to create connection from another pool:
when you attempt to get the connection from the second non-XA connection pool.


Q. How does the Connection Reserve Timeout parameter on a datasource work?
A: Weblogic uses this parameter to wait for a connection to become available in the pool. The wait starts when your application requests a connection from a connection pool backing the datasource. If the pool has no connection free, it waits up to the set timeout to see if a connection is returned to the pool by some other thread. If one is, you get that connection. If no connection becomes available within the timeout period, you get an exception.

A better approach is to increase the size of the connection pool so that it can accommodate all connection requests. This parameter has no effect if there are timeouts while testing reserved connections.
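A minimal sketch of what the application sees (the datasource JNDI name is an assumption): getConnection() blocks for up to the reserve timeout and then throws a SQLException if nothing becomes free:

import java.sql.Connection;
import java.sql.SQLException;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class ReserveTimeoutSketch {
    public void doWork() throws Exception {
        DataSource ds = (DataSource) new InitialContext().lookup("jdbc/MyDataSource"); // illustrative name
        Connection con = null;
        try {
            con = ds.getConnection();  // waits up to the Connection Reserve Timeout if the pool is exhausted
            // ... use the connection ...
        } catch (SQLException e) {
            // thrown when no connection became available within the timeout
            System.err.println("Could not reserve a connection: " + e.getMessage());
        } finally {
            if (con != null) con.close(); // return the connection to the pool
        }
    }
}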


 
Q. What is node manager?
Ans: Node Manager is a WebLogic Server utility that enables you to start, shut down, and restart Administration Server and Managed Server instances from a remote location.

Q. What are the modes available in WLST?
Ans: Interactive Mode, Script Mode, and Embedded Mode.
- Interactive Mode: interactively, on the command line
- Script Mode: in batches, supplied in a file
- Embedded Mode: embedded in Java code

Q. What are the files which get generated when we click on “lock & edit” option of weblogic console?
Ans: 2 files - edit.lok & pending

Q. Where are user credentials stored?
Ans: In WebLogic's embedded LDAP server (by default).


Q. What are the causes of "java.net.SocketException: Broken pipe"


Ans: These kinds of errors are most common in Web Applications.    


Scenario 1:

One very common scenario that causes broken pipe errors: a browser client submits a request to the server, and while the server is writing the response back to the browser, the user hits the Stop button or simply kills the browser.

Since the server is still writing the response but the "pipe" to the client is gone, the result is a broken pipe. This in itself is not an error, but the specifications mandate that the server regard it as one, so WebLogic has to report it and log it to the server log (see the sketch at the end of this scenario).


This is quite common: the user's bandwidth might be low, the server may take a while to process the request, or the response to be written back may simply be large. Often users do not want to wait and click the Stop button in the browser.


This error indicates a communication problem between the client and the server. The socket from the server back to the client is dead.


This can occur from events such as:


a) The browser (client) breaks the socket connection by hitting the Stop button, or the user issues another HTTP request before the first one finishes.


b) Network congestion, latency, etc.


c) Firewalls, timeouts, etc.    


The error can also happen if the limit on the number of open file descriptors has been reached.


So, these errors can be indicative of user-generated events beyond your control, network or Internet problems beyond your control, or problems caused by your own network configuration.


For this issue you can also compare your production network topology and configuration against your test configuration, as the latter is usually less complex and restrictive.
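As a minimal illustration of the first scenario (the port and buffer sizes are arbitrary), writing to a socket whose peer has already gone away eventually fails with a SocketException such as "Broken pipe":

import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketException;

public class BrokenPipeSketch {
    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(9999);   // arbitrary test port
        Socket client = server.accept();
        OutputStream out = client.getOutputStream();
        byte[] chunk = new byte[8192];
        try {
            // If the peer closes the connection mid-stream (e.g. the user hits Stop),
            // one of these writes fails once the OS notices the closed socket.
            for (int i = 0; i < 100000; i++) {
                out.write(chunk);
            }
        } catch (SocketException e) {
            System.err.println("Client went away: " + e.getMessage()); // e.g. "Broken pipe"
        } finally {
            client.close();
            server.close();
        }
    }
}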


Scenario 2: A broken pipe can also occur because your JDBC connection has expired for some reason: either the database has been restarted or the connection has timed out. Sometimes an improper driver will also cause this problem.