
Refresher Before Interview




Question : What are features of JDK 1.7?
Answer :
1) Type inference : JDK 1.7 introduced a new operator <>, known as the diamond operator, to make type inference available for constructors as well. Prior to Java 7, type inference was only available for methods, and Joshua Bloch had rightly predicted in Effective Java (2nd Edition) that it would become available for constructors too.
Before JDK 7, you had to specify the type parameters on both the left and right hand side of an object creation expression; now they are only needed on the left hand side, as shown in the example below.

Prior JDK 7
Map<String, List<String>> employeeRecords =  new HashMap<String, List<String>>();
List<Integer> primes = new ArrayList<Integer>();

In JDK 7
Map<String, List<String>> employeeRecords =  new HashMap<>();
List<Integer> primes = new ArrayList<>();

So you have to type less in Java 7 while working with Collections, where we heavily use generics. See here for more detailed information on the diamond operator in Java.

2) String in Switch : Before JDK 7, only integral types could be used as the selector for a switch-case statement. In JDK 7, you can use a String object as the selector. For example,

String state = "NEW";

switch (state) {
   case "NEW": System.out.println("Order is in NEW state"); break;
   case "CANCELED": System.out.println("Order is cancelled"); break;
   case "REPLACE": System.out.println("Order is replaced successfully"); break;
   case "FILLED": System.out.println("Order is filled"); break;
   default: System.out.println("Invalid");
}

The equals() and hashCode() methods from java.lang.String are used in the comparison, which is case-sensitive. The benefit of using String in switch is that the Java compiler can generate more efficient code than for a nested if-then-else statement. See here for more detailed information on how to use String in a switch-case statement.


3) Automatic Resource Management : Before JDK 7, we needed to use a finally block to ensure that a resource was closed regardless of whether the try statement completed normally or abruptly. For example, while reading files and streams we had to close them in a finally block, which resulted in lots of boilerplate and messy code, as shown below:
public static void main(String args[]) {
    FileInputStream fin = null;
    BufferedReader br = null;
    try {
        fin = new FileInputStream("info.xml");
        br = new BufferedReader(new InputStreamReader(fin));
        if (br.ready()) {
            String line1 = br.readLine();
            System.out.println(line1);
        }
    } catch (FileNotFoundException ex) {
        System.out.println("Info.xml is not found");
    } catch (IOException ex) {
        System.out.println("Can't read the file");
    } finally {
        try {
            if (br != null) br.close();   // close the wrapper first
            if (fin != null) fin.close();
        } catch (IOException ie) {
            System.out.println("Failed to close files");
        }
    }
}

Look at this code: how many lines of boilerplate does it have?

Now in Java 7, you can use the try-with-resources feature to automatically close resources that implement the AutoCloseable or Closeable interface, e.g. streams, files, socket handles, database connections, etc. JDK 7's try-with-resources statement ensures that each resource declared in try(...) is closed at the end of the statement by calling its close() method. The same example in Java 7 looks like the code below, which is much more concise and cleaner:

public static void main(String args[]) {
    try (FileInputStream fin = new FileInputStream("info.xml");
         BufferedReader br = new BufferedReader(new InputStreamReader(fin))) {
        if (br.ready()) {
            String line1 = br.readLine();
            System.out.println(line1);
        }
    } catch (FileNotFoundException ex) {
        System.out.println("Info.xml is not found");
    } catch (IOException ex) {
        System.out.println("Can't read the file");
    }
}
Since Java takes care of closing opened resources, including files and streams, this should mean no more leaked file descriptors and, hopefully, an end to file-descriptor errors. Even JDBC 4.1 resources are retrofitted as AutoCloseable too.
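Try-with-resources is not limited to JDK classes; any type implementing AutoCloseable works. Here is a minimal sketch (the TrackedResource class is hypothetical, purely for illustration):

```java
// Hypothetical resource class: anything implementing AutoCloseable
// can be managed by try-with-resources.
class TrackedResource implements AutoCloseable {
    static boolean closed = false;

    void use() {
        System.out.println("using resource");
    }

    @Override
    public void close() {
        closed = true;
        System.out.println("resource closed");
    }
}

public class AutoCloseDemo {
    public static void main(String[] args) {
        // close() is invoked automatically when the try block exits,
        // even if an exception is thrown inside it.
        try (TrackedResource r = new TrackedResource()) {
            r.use();
        }
        System.out.println("closed = " + TrackedResource.closed);
    }
}
```

Resources are closed in the reverse order of their declaration, which is why the nested-stream example above can declare the FileInputStream before the BufferedReader.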

4) Fork/Join Framework: The fork/join framework is an implementation of the ExecutorService interface that allows you to take advantage of the multiple processors available in modern servers. It is designed for work that can be broken into smaller pieces recursively. The goal is to use all the available processing power to enhance the performance of your application. As with any ExecutorService implementation, the fork/join framework distributes tasks to worker threads in a thread pool. The fork/join framework is distinct because it uses a work-stealing algorithm, which is quite different from a producer-consumer arrangement: worker threads that run out of things to do can steal tasks from other threads that are still busy. The center of the fork/join framework is the ForkJoinPool class, an extension of AbstractExecutorService. ForkJoinPool implements the core work-stealing algorithm and can execute ForkJoinTask processes. You can wrap code in a ForkJoinTask subclass like RecursiveTask (which can return a result) or RecursiveAction. See here for some more information on the fork/join framework in Java.
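To make this concrete, here is a small sketch of the fork/join recursion using RecursiveTask to sum an array in parallel; the class name and threshold are arbitrary choices, not part of the API:

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Fork/join pattern: recursively split a summation until the chunk
// is small enough to compute directly.
public class ForkJoinSum extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1_000;
    private final long[] numbers;
    private final int start, end;

    ForkJoinSum(long[] numbers, int start, int end) {
        this.numbers = numbers;
        this.start = start;
        this.end = end;
    }

    @Override
    protected Long compute() {
        if (end - start <= THRESHOLD) {          // small enough: do it directly
            long sum = 0;
            for (int i = start; i < end; i++) sum += numbers[i];
            return sum;
        }
        int mid = (start + end) / 2;
        ForkJoinSum left = new ForkJoinSum(numbers, start, mid);
        ForkJoinSum right = new ForkJoinSum(numbers, mid, end);
        left.fork();                             // run left half asynchronously
        long rightResult = right.compute();      // compute right half in this thread
        return left.join() + rightResult;        // wait for the forked half
    }

    public static void main(String[] args) {
        long[] numbers = new long[10_000];
        for (int i = 0; i < numbers.length; i++) numbers[i] = i + 1;
        long sum = new ForkJoinPool().invoke(
                new ForkJoinSum(numbers, 0, numbers.length));
        System.out.println("sum = " + sum);      // 1 + 2 + ... + 10000 = 50005000
    }
}
```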


5) Underscore in Numeric Literals : In JDK 7, you can insert underscores '_' between the digits of a numeric literal (integral and floating-point literals) to improve readability. This is especially valuable for people who use large numbers in source files, and may be useful in the finance and computing domains. For example,

int billion = 1_000_000_000;  // 10^9
long creditCardNumber =  1234_4567_8901_2345L; //16 digit number
long ssn = 777_99_8888L;
double pi = 3.1415_9265;
float  pif = 3.14_15_92_65f;

You can put underscores at convenient points to make the literal more readable; for example, for large amounts an underscore every three digits makes sense, and for credit card numbers, which are 16 digits long, an underscore after every 4th digit makes sense, as that is how they are printed on cards. By the way, remember that you cannot put an underscore immediately after the decimal point, or at the beginning or end of a number. For example, the following numeric literals are invalid because of wrong placement of the underscore:

double pi = 3._1415_9265; // underscore just after the decimal point
long creditcardNum = 1234_4567_8901_2345_L; // underscore at the end of the number
long ssn = _777_99_8888L; // underscore at the beginning

See my post about how to use underscore on numeric literals for more information and use case.

6) Catching Multiple Exception Types in a Single Catch Block: In JDK 7, a single catch block can handle more than one exception type.

For example, before JDK 7, you need two catch blocks to catch two exception types although both perform identical task:

try {

   ......

} catch(ClassNotFoundException ex) {
   ex.printStackTrace();
} catch(SQLException ex) {
   ex.printStackTrace();
}

In JDK 7, you can use a single catch block, with the exception types separated by '|':

try {

   ......

} catch(ClassNotFoundException|SQLException ex) {

   ex.printStackTrace();

}

By the way, just remember that alternatives in a multi-catch statement cannot be related by subclassing. For example, a multi-catch statement like the one below will cause a compile-time error:

try {

   ......

} catch (FileNotFoundException | IOException ex) {

   ex.printStackTrace();

}

Because alternatives in a multi-catch statement cannot be related by subclassing, this fails at compile time with:
java.io.FileNotFoundException is a subclass of alternative java.io.IOException
        at Test.main(Test.java:18)

See here to learn more about improved exception handling in Java SE 7.


7) Binary Literals with prefix "0b": In JDK 7, you can express literal values in binary with the prefix '0b' (or '0B') for integral types (byte, short, int and long), similar to the C/C++ language. Before JDK 7, you could only use decimal, octal (prefix '0') or hexadecimal (prefix '0x' or '0X') values.

int mask = 0b01010000101;

or even better

int binary = 0B0101_0000_1010_0010_1101_0000_1010_0010;


8) Java NIO 2.0 : Java SE 7 introduced the java.nio.file package, which, together with its related package java.nio.file.attribute, provides comprehensive support for file I/O and for accessing the default file system. It also introduced the Path class, which lets you represent any path in the operating system. The new file-system API complements the older one and provides several useful methods for checking, deleting, copying, and moving files; for example, you can now check whether a file is hidden in Java. You can also create symbolic and hard links from Java code. The new JDK 7 file API is also capable of searching for files using wildcards, and you get support for watching a directory for changes. I would recommend checking the Javadoc of the new file package to learn more about this useful feature.
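A short sketch of the java.nio.file basics: create, copy, inspect and delete a file. The file names here are arbitrary examples:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class NioDemo {
    public static void main(String[] args) throws IOException {
        // Write a small file, then copy it with the new API.
        Path source = Paths.get("nio-demo.txt");
        Files.write(source, "hello nio".getBytes());

        Path copy = Paths.get("nio-demo-copy.txt");
        Files.copy(source, copy, StandardCopyOption.REPLACE_EXISTING);

        // Query attributes directly through the Files utility class.
        System.out.println("exists = " + Files.exists(copy));
        System.out.println("hidden = " + Files.isHidden(copy));
        System.out.println("size = " + Files.size(copy));

        // Files.delete throws if the file is missing; deleteIfExists does not.
        Files.delete(source);
        Files.deleteIfExists(copy);
    }
}
```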


9) G1 Garbage Collector : JDK 7 introduced a new garbage collector known as G1, which is short for "garbage first". The G1 garbage collector performs clean-up where there is the most garbage. To achieve this, it splits the Java heap into multiple regions, as opposed to the 3 regions used prior to Java 7 (new, old and permgen space). It's said that G1 is quite predictable and provides greater throughput for memory-intensive applications.


10) More Precise Rethrowing of Exceptions : The Java SE 7 compiler performs a more precise analysis of re-thrown exceptions than earlier releases of Java SE. This enables you to specify more specific exception types in the throws clause of a method declaration. Before JDK 7, re-throwing an exception was treated as throwing the type of the catch parameter. For example, suppose your try block can throw ParseException as well as IOException. In order to catch all exceptions and rethrow them, you would have to catch Exception and declare your method as throwing Exception. This is an obscure, non-precise throw, because you are throwing a general Exception type (instead of specific ones) and statements calling your method need to catch this general Exception. This will become clearer with the following example of exception handling in code prior to Java 1.7:

public void obscure() throws Exception{
    try {
        new FileInputStream("abc.txt").read();
        new SimpleDateFormat("ddMMyyyy").parse("12-03-2014");      
    } catch (Exception ex) {
        System.out.println("Caught exception: " + ex.getMessage());
        throw ex;
    }
}

From JDK 7 onwards you can be more precise when declaring the types of exceptions in the throws clause of a method. This precision comes from the fact that, if you re-throw an exception from a catch block, you are actually throwing an exception type which:

   1) your try block can throw,
   2) has not been handled by any previous catch block, and
   3) is a subtype of one of the exceptions declared as catch parameters

This leads to improved checking for re-thrown exceptions. You can be more precise about the exceptions being thrown from the method and you can handle them a lot better at client side, as shown in following example :

public void precise() throws ParseException, IOException {
    try {
        new FileInputStream("abc.txt").read();
        new SimpleDateFormat("ddMMyyyy").parse("12-03-2014");      
    } catch (Exception ex) {
        System.out.println("Caught exception: " + ex.getMessage());
        throw ex;
    }
}
The Java SE 7 compiler allows you to specify the exception types ParseException and IOException in the throws clause of the precise() method declaration because you can re-throw an exception that is a supertype of any of the types declared in throws; here we re-throw java.lang.Exception, the superclass of all checked exceptions. In some places you will also see the final keyword on the catch parameter, but that is no longer mandatory.

That's all about what you can revise in JDK 7. All these new features of Java 7 are very helpful in your goal towards clean code and developer productivity. With lambda expression introduced in Java 8, this goal to cleaner code in Java has reached another milestone. Let me know, if you think I have left out any useful feature of Java 1.7, which you think should be here.

P.S. If you love books then you may like the Java 7 New Features Cookbook from Packt Publishing as well.


Question : What are features of JDK 1.8?
Answer : 
1. Lambda expressions : Even if we really didn’t want to go mainstream here, there’s little doubt that from a developer’s perspective, the most dominant feature of Java 8 is the new support for Lambda expressions. This addition to the language brings Java to the forefront of functional programming, right there with other functional JVM-based languages such as Scala and Clojure.

We’ve previously looked into how Java implemented Lambda expressions, and how it compared to the approach taken by Scala. From Java’s perspective this is by far one of the biggest additions to the language in the past decade.


At minimum, it’s recommended that you become familiar with the Lambda syntax, especially as it relates to array and collection operations, where Lambdas have been tightly integrated into the core language libraries. It is highly likely that you’ll start seeing more and more code like the snippet below, both in 3rd-party code and within your organization’s code.

Map<Person.Sex, List<Person>> byGender =
roster.stream().collect(Collectors.groupingBy(Person::getGender));

A pretty efficient way of grouping a collection by the value of a specific class field.
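Since the Person class above is hypothetical, here is a self-contained version of the same grouping idiom, using string length as the grouping key instead:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class GroupingDemo {
    public static void main(String[] args) {
        List<String> roster = Arrays.asList("Ann", "Bob", "Claire", "Dmitri");

        // Group the collection by a derived key, exactly like the
        // Person::getGender example, but with String::length as the key.
        Map<Integer, List<String>> byLength = roster.stream()
                .collect(Collectors.groupingBy(String::length));

        System.out.println(byLength.get(3)); // [Ann, Bob]
        System.out.println(byLength.get(6)); // [Claire, Dmitri]
    }
}
```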

2. Parallel operations : With the addition of Lambda expressions to array operations, Java introduced a key concept into the language: internal iteration. Essentially, as developers we're used to using loop operations as one of the most basic programming idioms, right up there with if and else.

The introduction of Lambda expressions turned that paradigm around, with the actual iteration over a collection on which a Lambda function is applied now carried out by the core library itself (i.e. internal iteration).

You can think of this as an extension of iterators where the actual operation of extracting the next item from a collection on which to operate is carried out by an iterator. An exciting possibility opened by this design pattern is to enable operations carried out on long arrays such as sorting, filtering and mapping to be carried out in parallel by the framework. When dealing with server code that’s processing lengthy collections on a continuous basis, this can lead to major throughput improvements with relatively little work from your end.

Here’s the same snippet as above, but using the framework’s new parallel processing capabilities –

ConcurrentMap<Person.Sex, List<Person>> byGender =
roster.parallelStream().collect(
Collectors.groupingByConcurrent(Person::getGender));

It’s a fairly small change that’s required to make this algorithm run on multiple threads.

3. Java + JavaScript : Java 8 is looking to right one of its biggest historical wrongs – the ever-growing distance between Java and JavaScript, one that has only increased in the past few years. With this new release, Java 8 introduces a completely new JVM JavaScript engine – Nashorn. This engine makes unique use of some of the new features introduced in Java 7, such as invokedynamic, to provide JVM-level speed for JavaScript execution, right there with the likes of V8 and SpiderMonkey.

This means that the next time you’re looking to integrate JS into your backend, instead of setting up a node.js instance, you can simply use the JVM to execute the code. The added bonus here is the ability to have seamless interoperability between your Java and JavaScript code in-process, without having to use various IPC/RPC methods to bridge the gap.

4. New date / time APIs : The complexity of the native Java date/time API has been a cause of pain for Java developers for many years, and Joda-Time has been filling this vacuum. An immediate question that arose early on was why Java 8 didn't simply adopt Joda as its native time framework. Due to what was perceived as a design flaw in Joda, Java 8 implemented its own new date / time API from scratch. The good news is that, unlike Calendar.getInstance(), the new APIs were designed with simplicity in mind, with clear operations for manipulating values in both human-readable and machine time formats.
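A few java.time basics as a sketch: immutable values, fluent arithmetic, and separate types for human-readable dates (LocalDate, Period) versus machine time (Duration). The date chosen is arbitrary:

```java
import java.time.Duration;
import java.time.LocalDate;
import java.time.Period;

public class TimeDemo {
    public static void main(String[] args) {
        // LocalDate is immutable; plusDays returns a new value.
        LocalDate date = LocalDate.of(2014, 3, 18);
        LocalDate later = date.plusDays(10);
        System.out.println(later);                // 2014-03-28

        // Period measures date-based amounts of time.
        Period period = Period.between(date, later);
        System.out.println(period.getDays());     // 10

        // Duration measures machine-time amounts.
        Duration twoHours = Duration.ofHours(2);
        System.out.println(twoHours.toMinutes()); // 120
    }
}
```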

5. Concurrent accumulators : One of the most common scenarios in concurrent programming is updating numeric counters accessed by multiple threads. There have been many idioms for this over the years, from synchronized blocks (which introduce a high level of contention), to read/write locks, to AtomicInteger(s). While the last are more efficient, as they rely directly on processor CAS instructions, they require a higher degree of familiarity to implement the required semantics correctly.

With Java 8 this problem is solved at the framework level with new concurrent accumulator classes that enable you to very efficiently increase / decrease the value of a counter in a thread safe manner. This is really a case where it’s not a question of taste, or preference – using these new classes in your code is really a no-brainer.
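The classes in question live in java.util.concurrent.atomic; a minimal sketch with LongAdder, which reduces contention versus AtomicLong by striping the counter across internal cells and combining them on read:

```java
import java.util.concurrent.atomic.LongAdder;

public class AdderDemo {
    public static void main(String[] args) throws InterruptedException {
        LongAdder counter = new LongAdder();

        // Two threads each bump the shared counter 100,000 times.
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) counter.increment();
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();

        // sum() combines the internal cells into the final value.
        System.out.println("count = " + counter.sum()); // 200000
    }
}
```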

Are there any other language features you think every developer should know about? Add them in the comments section.


Question : What changed in garbage collection in JDK 1.7/1.8?
Answer : The collector splits the heap up into fixed-size regions and tracks the live data in those regions. It keeps a set of pointers — the "remembered set" — into and out of the region. When a GC is deemed necessary, it collects the regions with less live data first (hence, "garbage first"). Often, this can mean collecting an entire region in one step: if the number of pointers into a region is zero, then it doesn't need to do a mark or sweep of that region.

For each region, it tracks various metrics that describe how long it will take to collect them. You can give it a soft real-time constraint about pause times, and it then tries to collect as much garbage as it can in that constrained time.


Question : What are the 4 Java Garbage Collectors – How the Wrong Choice Dramatically Impacts Performance?
Answer : The year is 2014 and there are two things that still remain a mystery to most developers – Garbage collection and understanding the opposite sex. Since I don’t know much about the latter, I thought I’d take a whack at the former, especially as this is an area that has seen some major changes and improvements with Java 8, especially with the removal of the PermGen and some new and exciting optimizations (more on this towards the end).

When we speak about garbage collection, the vast majority of us know the concept and employ it in our everyday programming. Even so, there’s much about it we don’t understand, and that’s when things get painful. One of the biggest misconceptions about the JVM is that it has one garbage collector, where in fact it provides four different ones, each with its own unique advantages and disadvantages. The choice of which one to use isn’t automatic and lies on your shoulders and the differences in throughput and application pauses can be dramatic.


What’s common about these four garbage collection algorithms is that they are generational, which means they split the managed heap into different segments, using the age-old assumptions that most objects in the heap are short lived and should be recycled quickly. As this too is a well-covered area, I’m going to jump directly into the different algorithms, along with their pros and their cons.

1. The Serial Collector : The serial collector is the simplest one, and the one you probably won't be using, as it's mainly designed for single-threaded environments (e.g. 32-bit client JVMs) and for small heaps. This collector freezes all application threads whenever it's working, which disqualifies it, for all intents and purposes, from being used in a server environment.

How to use it: You can turn it on with the -XX:+UseSerialGC JVM argument.

2. The Parallel / Throughput collector : Next up is the Parallel collector. This is the JVM's default collector. Much like its name suggests, its biggest advantage is that it uses multiple threads to scan through and compact the heap. The downside of the parallel collector is that it will stop application threads when performing either a minor or full GC collection. The parallel collector is best suited for apps that can tolerate application pauses and are trying to optimize for lower CPU overhead caused by the collector.

3. The CMS Collector: Following up on the parallel collector is the CMS collector (“concurrent-mark-sweep”). This algorithm uses multiple threads (“concurrent”) to scan through the heap (“mark”) for unused objects that can be recycled (“sweep”). This algorithm will enter “stop the world” (STW) mode in two cases: when initializing the initial marking of roots (objects in the old generation that are reachable from thread entry points or static variables) and when the application has changed the state of the heap while the algorithm was running concurrently, forcing it to go back and do some final touches to make sure it has the right objects marked.

The biggest concern when using this collector is encountering promotion failures, which are instances where a race condition occurs between collecting the young and old generations. If the collector needs to promote young objects to the old generation but hasn't had enough time to clear space for them, it will have to do so first, resulting in a full STW collection, the very thing the CMS collector was meant to prevent. To make sure this doesn't happen, either increase the size of the old generation (or the entire heap, for that matter) or allocate more background threads to the collector so it can keep up with the rate of object allocation.

Another downside of this algorithm in comparison to the parallel collector is that it uses more CPU in order to provide the application with higher levels of continuous throughput, by using multiple threads to perform scanning and collection. For most long-running server applications, which are averse to application freezes, that's usually a good trade-off to make. Even so, this algorithm is not on by default; you have to specify -XX:+UseConcMarkSweepGC to enable it. If you're willing to allocate more CPU resources to avoid application pauses, this is the collector you'll probably want to use, assuming that your heap is less than 4GB in size. However, if it's greater than 4GB, you'll probably want to use the last algorithm – the G1 Collector.

4. The G1 Collector: The Garbage first collector (G1) introduced in JDK 7 update 4 was designed to better support heaps larger than 4GB. The G1 collector utilizes multiple background threads to scan through the heap that it divides into regions, spanning from 1MB to 32MB (depending on the size of your heap). G1 collector is geared towards scanning those regions that contain the most garbage objects first, giving it its name (Garbage first). This collector is turned on using the –XX:+UseG1GC flag.

This strategy reduces the chance of the heap being depleted before the background threads have finished scanning for unused objects, in which case the collector would have to stop the application, resulting in a STW collection. The G1 also has another advantage: it compacts the heap on-the-go, something the CMS collector only does during full STW collections.

Large heaps have been a fairly contentious area over the past few years with many developers moving away from the single JVM per machine model to more micro-service, componentized architectures with multiple JVMs per machine. This has been driven by many factors including the desire to isolate different application parts, simplifying deployment and avoiding the cost which would usually come with reloading application classes into memory (something which has actually been improved in Java 8).

Even so, one of the biggest drivers to do this when it comes to the JVM stems from the desire to avoid those long “stop the world” pauses (which can take many seconds in a large collection) that occur with large heaps. This has also been accelerated by container technologies like Docker that enable you to deploy multiple apps on the same physical machine with relative ease.

Java 8 and the G1 Collector : Another nice optimization, shipped with Java 8 update 20, is G1 Collector string deduplication. Since strings (and their internal char[] arrays) take up much of the heap, a new optimization enables the G1 collector to identify strings that are duplicated more than once across the heap and fix them to point to the same internal char[] array, avoiding multiple copies of the same string residing inefficiently in the heap. You can use the -XX:+UseStringDeduplication JVM argument to try this out.

Question : What is Java 8 and PermGen?
Answer : One of the biggest changes made in Java 8 was removing the permgen part of the heap that was traditionally allocated for class metadata, interned strings and static variables. This traditionally required developers with applications that load a significant number of classes (something common with apps using enterprise containers) to optimize and tune this portion of the heap specifically. This has over the years become the source of many OutOfMemory exceptions, so having the JVM (mostly) take care of it is a very nice addition. Even so, that in itself will probably not reduce the tide of developers decoupling their apps into multiple JVMs.

Each of these collectors is configured and tuned differently with a slew of toggles and switches, each with the potential to increase or decrease throughput, all based on the specific behavior of your app. We’ll delve into the key strategies of configuring each of these in our next posts.

In the meanwhile, what are the things you’re most interested in learning about regarding the differences between the different collectors? Hit me up in the comments section


Question : Java 8 From PermGen to Metaspace?
Answer : The JDK 8 HotSpot JVM now uses native memory for the representation of class metadata; this space is called Metaspace, similar to the Oracle JRockit and IBM JVMs.

PermGen space situation
This memory space is completely removed.
The PermSize and MaxPermSize JVM arguments are ignored and a warning is issued if present at start-up.
Metaspace memory allocation model
Most allocations for the class metadata are now allocated out of native memory.
The klasses that were used to describe class metadata have been removed.
Metaspace capacity
By default, class metadata allocation is limited by the amount of available native memory (capacity will of course depend on whether you use a 32-bit or 64-bit JVM, along with OS virtual memory availability).

The good news is that this means no more java.lang.OutOfMemoryError: PermGen space problems and no need to tune and monitor this memory space anymore… but not so fast. While this change is invisible by default, we will show next that you will still need to worry about the class metadata memory footprint. Please also keep in mind that this new feature does not magically eliminate class and classloader memory leaks; you will need to track down these problems using a different approach and by learning the new naming convention.

A new flag is available (MaxMetaspaceSize), allowing you to limit the amount of native memory used for class metadata. If you don't specify this flag, the Metaspace will dynamically re-size depending on the application demand at runtime.
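For example, to cap class metadata at 256 MB (the jar name is a placeholder):

```shell
# Limit Metaspace to 256 MB; once usage reaches the cap, Metaspace GC runs,
# and if space still cannot be reclaimed, OutOfMemoryError: Metaspace is thrown.
java -XX:MaxMetaspaceSize=256m -jar app.jar
```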

Metaspace garbage collection
Garbage collection of the dead classes and classloaders is triggered once the class metadata usage reaches the “MaxMetaspaceSize”.

Proper monitoring & tuning of the Metaspace will obviously be required in order to limit the frequency or duration of such garbage collections. Excessive Metaspace garbage collections may be a symptom of a class/classloader memory leak or inadequate sizing for your application.
Java heap space impact

Some miscellaneous data has been moved to the Java heap space. This means you may observe an increase of the Java heap space following a future JDK 8 upgrade.
Metaspace monitoring

Metaspace usage is available from the HotSpot 1.8 verbose GC log output.
Jstat & JVisualVM have not been updated at this point based on our testing with b75 and the old PermGen space references are still present.

Question : What is Input stream and reader?
Answer : Stream classes are byte-oriented: all InputStream classes (buffered and non-buffered) read data byte by byte from a stream, and all OutputStream classes (buffered and non-buffered) write data byte by byte to a stream. Stream classes are useful when you have raw data or are dealing with binary files like images.

On the other hand, Reader/Writer are character-based classes. These classes read or write one character at a time from or to a stream. They extend either java.io.Reader (all character input classes) or java.io.Writer (all character output classes), and are useful when you are dealing with text files or other textual streams. These classes also come in buffered and non-buffered variants.
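A small sketch of the difference, using ByteArrayInputStream to stand in for a file: the byte stream sees the two UTF-8 bytes of 'é' separately, while the reader decodes them into a single character:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.Reader;
import java.nio.charset.StandardCharsets;

public class StreamVsReader {
    public static void main(String[] args) throws IOException {
        // "A" is one byte in UTF-8; '\u00e9' ('é') encodes as two bytes.
        byte[] data = "A\u00e9".getBytes(StandardCharsets.UTF_8);

        // InputStream: returns raw bytes, one at a time.
        InputStream in = new ByteArrayInputStream(data);
        System.out.println("bytes: " + in.read() + " " + in.read() + " " + in.read());

        // Reader: decodes the bytes into characters.
        Reader reader = new InputStreamReader(
                new ByteArrayInputStream(data), StandardCharsets.UTF_8);
        System.out.println("chars: " + reader.read() + " " + reader.read());
    }
}
```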

Question : What are Thread states?
Answer :
NEW: A thread that has not yet been started is in this state.
RUNNABLE: A thread executing in the Java virtual machine is in this state.
BLOCKED: A thread that is blocked waiting for a monitor lock is in this state.
WAITING: A thread that is waiting indefinitely for another thread to perform a particular action is in this state.
TIMED_WAITING: A thread that is waiting for another thread to perform an action for up to a specified waiting time is in this state.
TERMINATED: A thread that has exited is in this state.
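The first and last of these transitions can be observed directly with Thread.getState():

```java
public class ThreadStates {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> { /* do nothing */ });

        System.out.println(t.getState()); // NEW: created but not started
        t.start();
        t.join();                         // wait until run() completes
        System.out.println(t.getState()); // TERMINATED: the thread has exited
    }
}
```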

Question : What is the difference between wait and sleep?
Answer : sleep() is a method used to pause the current thread for a given amount of time, whereas with wait() the thread goes into the waiting state and won't come back automatically until notify() or notifyAll() is called.

The major difference is that wait() releases the lock or monitor while sleep() doesn't release any lock or monitor while waiting. wait is used for inter-thread communication, while sleep is generally used to introduce a pause in execution.

Thread.sleep() sends the current thread into the “Not Runnable” state for some amount of time. The thread keeps the monitors it has acquired — i.e. if the thread is currently in a synchronized block or method no other thread can enter this block or method. If another thread calls t.interrupt() it will wake up the sleeping thread. Note that sleep is a static method, which means that it always affects the current thread (the one that is executing the sleep method). A common mistake is to call t.sleep() where t is a different thread; even then, it is the current thread that will sleep, not the t thread.

object.wait() sends the current thread into the “Not Runnable” state, like sleep(), but with a twist. Wait is called on an object, not a thread; we call this object the “lock object.” Before lock.wait() is called, the current thread must synchronize on the lock object; wait() then releases this lock, and adds the thread to the “wait list” associated with the lock. Later, another thread can synchronize on the same lock object and call lock.notify(). This wakes up the original, waiting thread. Basically, wait()/notify() is like sleep()/interrupt(), only the active thread does not need a direct pointer to the sleeping thread, but only to the shared lock object.

synchronized(LOCK) {
    Thread.sleep(1000); // LOCK is held
}

synchronized(LOCK) {
    LOCK.wait(); // LOCK is not held
}
Let's categorize all the above points:

Call on:
    wait(): called on an object; the current thread must synchronize on that lock object.
    sleep(): called on the Thread class; always affects the currently executing thread.
Synchronized:

    wait(): must be called from a synchronized block or method; multiple threads access the same object one by one.
    sleep(): needs no synchronized context; if called inside one, other threads must wait until the sleeping thread's sleep is over.
Hold lock:

    wait(): releases the lock so that other threads get a chance to execute.
    sleep(): keeps the lock for at least the specified time, unless interrupted.
Wake-up condition:

    wait(): until notify() or notifyAll() is called on the object (or the optional timeout expires).
    sleep(): until at least the specified time expires or interrupt() is called.
Usage:

    sleep(): for pausing execution for a period of time;
    wait(): for multi-thread synchronization.
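The wait()/notify() handshake described above can be sketched as a small runnable example (the class and message names are illustrative, not from any particular library):

```java
public class WaitNotifyDemo {
    private static final Object LOCK = new Object();
    private static String message = null;

    public static void main(String[] args) throws InterruptedException {
        Thread producer = new Thread(() -> {
            synchronized (LOCK) {       // must hold LOCK before calling notify()
                message = "hello";
                LOCK.notify();          // wakes up the thread waiting on LOCK
            }
        });

        synchronized (LOCK) {           // must hold LOCK before calling wait()
            producer.start();           // producer blocks on LOCK until we wait()
            while (message == null) {   // loop guards against spurious wakeups
                LOCK.wait();            // releases LOCK while waiting
            }
        }
        System.out.println("Received: " + message);
    }
}
```

Note the while loop around wait(): the JVM is allowed to wake a waiting thread spuriously, so the condition must always be rechecked after waking.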



Question : What is deadlock and how can you avoid the deadlock?
Answer : Though this could have many answers, my version is: first I would look at the code. If I see nested synchronized blocks, one synchronized method calling another, or attempts to acquire locks on different objects in different orders, then there is a good chance of deadlock if the developer is not very careful.

The other way is to detect it when the application actually locks up while running: take a thread dump. On Linux you can do this with the command "kill -3", which prints the status of all threads into the application log file, and you can see which thread is locked on which object.

Once you answer this, they may ask you to write code which will result in deadlock. Here is one of my versions:

package com.java8;

public class ThreadDeadLock {
       String s1 = "vikash";
       String s2 = "jay";

       public static void main(String[] args) {
              ThreadDeadLock deadLock = new ThreadDeadLock();
              ThreadClassOne classOne = deadLock.new ThreadClassOne(deadLock);
              ThreadClassTwo classTwo = deadLock.new ThreadClassTwo(deadLock);
              classOne.start();
              classTwo.start();
       }

       class ThreadClassOne extends Thread {
              ThreadDeadLock deadLock;

              public ThreadClassOne(ThreadDeadLock deadLock) {
                     this.deadLock = deadLock;
              }

              public void run() {
                     deadLock.method1();
              }
       }

       class ThreadClassTwo extends Thread {
              ThreadDeadLock deadLock;

              public ThreadClassTwo(ThreadDeadLock deadLock) {
                     this.deadLock = deadLock;
              }

              public void run() {
                     deadLock.method2();
              }
       }

       // Locks s1 first, then s2
       public void method1() {
              synchronized (s1) {
                     System.out.println("method1: acquired lock on s1");
                     try { Thread.sleep(10); } catch (Exception e) { }
                     synchronized (s2) {
                            System.out.println("method1: acquired lock on s2");
                     }
              }
       }

       // Locks s2 first, then s1 (the opposite order), which enables deadlock
       public void method2() {
              synchronized (s2) {
                     System.out.println("method2: acquired lock on s2");
                     try { Thread.sleep(10); } catch (Exception e) { }
                     synchronized (s1) {
                            System.out.println("method2: acquired lock on s1");
                     }
              }
       }
}


If method1() and method2() are called by two or more threads, there is a good chance of deadlock: if thread 1 acquires the lock on s1 while executing method1() and thread 2 acquires the lock on s2 while executing method2(), each will wait forever for the other to release its lock, which will never happen.

This is exactly what our program demonstrates: one thread holds the lock on one object and waits for the lock on another object, which is held by the other thread.

How to avoid deadlock in Java?
Answer : Now the interviewer comes to the final part, one of the most important in my view: how do you fix a deadlock? Or, how do you avoid deadlock in Java?

If you have looked at the above code carefully, you may have figured out that the real cause of the deadlock is not multiple threads as such, but the order in which they request locks. If you enforce an ordered access, the problem is resolved. Here is my fixed version, which avoids deadlock by breaking the circular wait condition.

package com.java8;

public class ThreadDeadLock {
       String s1 = "vikash";
       String s2 = "jay";

       public static void main(String[] args) {
              ThreadDeadLock deadLock = new ThreadDeadLock();
              ThreadClassOne classOne = deadLock.new ThreadClassOne(deadLock);
              ThreadClassTwo classTwo = deadLock.new ThreadClassTwo(deadLock);
              classOne.start();
              classTwo.start();
       }

       class ThreadClassOne extends Thread {
              ThreadDeadLock deadLock;

              public ThreadClassOne(ThreadDeadLock deadLock) {
                     this.deadLock = deadLock;
              }

              public void run() {
                     deadLock.method1();
              }
       }

       class ThreadClassTwo extends Thread {
              ThreadDeadLock deadLock;

              public ThreadClassTwo(ThreadDeadLock deadLock) {
                     this.deadLock = deadLock;
              }

              public void run() {
                     deadLock.method2();
              }
       }

       // Both methods acquire the locks in the same order: s1 first, then s2
       public void method1() {
              synchronized (s1) {
                     System.out.println("method1: acquired lock on s1");
                     try { Thread.sleep(10); } catch (Exception e) { }
                     synchronized (s2) {
                            System.out.println("method1: acquired lock on s2");
                     }
              }
       }

       public void method2() {
              synchronized (s1) {
                     System.out.println("method2: acquired lock on s1");
                     try { Thread.sleep(10); } catch (Exception e) { }
                     synchronized (s2) {
                            System.out.println("method2: acquired lock on s2");
                     }
              }
       }
}


Now there cannot be any deadlock, because both methods acquire the locks on s1 and s2 in the same order. So if thread A acquires the lock on s1, thread B cannot proceed until thread A releases it; neither thread can ever hold s2 while waiting for s1, so the circular wait can no longer occur.


Question : Can you name the three executor services?
Answer :
ExecutorService executorService1 = Executors.newSingleThreadExecutor();
ExecutorService executorService2 = Executors.newFixedThreadPool(10);
ExecutorService executorService3 = Executors.newScheduledThreadPool(10);


Question : Give an example of newSingleThreadExecutor.
Answer : Find it below,
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ExecutorServiceDemo {
       public static void main(String[] args) {
              ExecutorService executorService = Executors.newSingleThreadExecutor();
              executorService.execute(new Runnable() {
                     @Override
                     public void run() {
                           System.out.println("Basic Execution service demo");
                     }
              });          
              executorService.shutdown();
       }
}


Question : What is newSingleThreadExecutor?
Answer : Creates an Executor that uses a single worker thread operating off an unbounded queue. (Note however that if this single thread terminates due to a failure during execution prior to shutdown, a new one will take its place if needed to execute subsequent tasks.) Tasks are guaranteed to execute sequentially, and no more than one task will be active at any given time. Unlike the otherwise equivalent newFixedThreadPool(1) the returned executor is guaranteed not to be reconfigurable to use additional threads.

Question : What are the common methods implemented by all executor services?
Answer :
execute(Runnable) 
submit(Runnable) 
submit(Callable) 
invokeAny(...) 
invokeAll(...)
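As a rough sketch of how some of these methods differ in practice (the class name and task values are made up for illustration):

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SubmitVsExecuteDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);

        // execute(Runnable): fire-and-forget, no result is returned
        pool.execute(() -> System.out.println("execute: no return value"));

        // submit(Callable): returns a Future carrying the task's result
        Future<Integer> future = pool.submit(() -> 21 * 2);
        System.out.println("submit result = " + future.get()); // get() blocks until done

        // invokeAny(Collection): returns the result of one successfully completed task
        List<Callable<Integer>> tasks = List.of(() -> 1, () -> 2);
        System.out.println("invokeAny returned: " + pool.invokeAny(tasks));

        pool.shutdown();
    }
}
```

The key distinction: execute() gives you no handle on the task, while submit() and the invoke methods return results (and propagate exceptions) through Future.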

Question : What is the difference between newSingleThreadExecutor and newFixedThreadPool(1)?
Answer : 

Similarity
newSingleThreadExecutor returns an ExecutorService with a single worker thread, and newFixedThreadPool(1) also returns an ExecutorService with a single worker thread. In both cases, if the thread terminates, a new thread will be created.

Difference
The ExecutorService returned by newSingleThreadExecutor can never increase its thread pool size beyond one. The ExecutorService returned by newFixedThreadPool(1) can have its pool size increased beyond one at runtime via setCorePoolSize of the underlying ThreadPoolExecutor.
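This difference can be shown with a short sketch against the plain JDK (the class name is illustrative). The fixed pool can be cast to ThreadPoolExecutor and resized, while the single-thread executor is wrapped in a delegate that hides those setters:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;

public class PoolReconfigDemo {
    public static void main(String[] args) {
        ExecutorService fixed = Executors.newFixedThreadPool(1);
        // newFixedThreadPool returns a ThreadPoolExecutor, so it can be resized at runtime
        ThreadPoolExecutor tpe = (ThreadPoolExecutor) fixed;
        tpe.setMaximumPoolSize(5);   // raise the ceiling first
        tpe.setCorePoolSize(5);      // then grow the core pool
        System.out.println("fixed pool core size now: " + tpe.getCorePoolSize());

        ExecutorService single = Executors.newSingleThreadExecutor();
        // The single-thread executor is wrapped, so it cannot be cast and reconfigured
        System.out.println("single is ThreadPoolExecutor: " + (single instanceof ThreadPoolExecutor));

        fixed.shutdown();
        single.shutdown();
    }
}
```

Raising maximumPoolSize before corePoolSize matters on newer JDKs, where setCorePoolSize rejects a value larger than the current maximum.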

Question : Give an example of the executor service by creating all the executor service types.
Answer :
public class ExecutorServiceDemo {
       public static void main(String[] args) {
               ExecutorService executorServiceSingleThread = Executors.newSingleThreadExecutor();
              ExecutorService executorServiceFixedThreadPool = Executors.newFixedThreadPool(10);
              ExecutorService executorServiceScheduledThreadPool = Executors.newScheduledThreadPool(10);
             
              executorServiceSingleThread.execute(new Runnable() {
                     @Override
                     public void run() {
                           System.out.println("- executorServiceSingleThread - Basic Execution service demo");
                     }
              });          
             
              executorServiceFixedThreadPool.execute(new Runnable() {
                     @Override
                     public void run() {
                           System.out.println("- executorServiceFixedThreadPool - Basic Execution service demo");
                     }
              });          
             
              executorServiceScheduledThreadPool.execute(new Runnable() {
                     @Override
                     public void run() {
                           System.out.println("- executorServiceScheduledThreadPool - Basic Execution service demo");
                     }
              });          

              executorServiceFixedThreadPool.shutdown();
              executorServiceSingleThread.shutdown();
              executorServiceScheduledThreadPool.shutdown();
       }
}


Question : What is volatile?
Answer : The Java volatile keyword is used to mark a Java variable as "being stored in main memory". More precisely that means, that every read of a volatile variable will be read from the computer's main memory, and not from the CPU cache, and that every write to a volatile variable will be written to main memory, and not just to the CPU cache.
In a multithreaded application where the threads operate on non-volatile variables, each thread may copy variables from main memory into a CPU cache while working on them, for performance reasons. If your computer contains more than one CPU, each thread may run on a different CPU. That means, that each thread may copy the variables into the CPU cache of different CPUs.
With non-volatile variables there are no guarantees about when the Java Virtual Machine (JVM) reads data from main memory into CPU caches, or writes data from CPU caches to main memory. This can cause several problems.

Imagine a situation in which two or more threads have access to a shared object which contains a counter variable.
Imagine too, that only Thread 1 increments the counter variable, but both Thread 1 and Thread 2 may read the counter variable from time to time.

If the counter variable is not declared volatile there is no guarantee about when the value of the counter variable is written from the CPU cache back to main memory. This means, that the counter variable value in the CPU cache may not be the same as in main memory.
The problem with threads not seeing the latest value of a variable because it has not yet been written back to main memory by another thread, is called a "visibility" problem. The updates of one thread are not visible to other threads.

By declaring the counter variable volatile all writes to the counter variable will be written back to main memory immediately. Also, all reads of the counter variable will be read directly from main memory.
Declaring a variable volatile thus guarantees the visibility for other threads of writes to that variable.
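The scenario above can be reduced to a small runnable sketch with a volatile flag (the class name is illustrative); without volatile on running, the worker thread might never observe the update made by the main thread:

```java
public class VolatileFlagDemo {
    // volatile guarantees the worker thread sees the main thread's write
    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            long iterations = 0;
            while (running) {   // re-read from main memory on every check
                iterations++;
            }
            System.out.println("worker stopped after seeing the updated flag");
        });
        worker.start();

        Thread.sleep(100);      // let the worker spin briefly
        running = false;        // this write becomes visible to the worker
        worker.join();
        System.out.println("done");
    }
}
```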

Question : What are Performance Considerations of volatile?
Answer : Reading and writing volatile variables causes the variable to be read from or written to main memory. Reading from and writing to main memory is more expensive than accessing the CPU cache. Accessing volatile variables also prevents instruction reordering, which is a normal performance-enhancement technique. Thus, you should only use volatile variables when you really need to enforce visibility of variables.

Details : http://tutorials.jenkov.com/java-concurrency/volatile.html

Question : Give a basic example of Serializable.
Answer :
import java.io.*;

public class SerializableDemo implements Serializable {
       private static final long serialVersionUID = 1L;
       int id; String name; String address; int salary;

       public static void main(String[] args) throws Exception {
              SerializableDemo demo = new SerializableDemo(1, "vikash", "Bokar", 10000);
              // Writing the object to the file
              File file = new File("C:/temp/obj.ser");
              FileOutputStream stream = new FileOutputStream(file);
              ObjectOutputStream objectOutputStream = new ObjectOutputStream(stream);
              objectOutputStream.writeObject(demo);
              objectOutputStream.close();
              // Reading the object from the file
              FileInputStream fileInputStream = new FileInputStream(file);
              ObjectInputStream objectInputStream = new ObjectInputStream(fileInputStream);
              SerializableDemo demo2 = (SerializableDemo) objectInputStream.readObject();
              objectInputStream.close();
              System.out.println(demo2);
       }

       public SerializableDemo(int id, String name, String address, int salary) {
              this.id = id;
              this.name = name;
              this.address = address;
              this.salary = salary;
       }

       @Override
       public String toString() {
              return "SerializableDemo [id=" + id + ", name=" + name + ", address=" + address + ", salary=" + salary + "]";
       }
}

Question : Example code of Externalizable.
Answer :
public class SerializableDemo implements Externalizable{
       int id; String name; String address; int salary;
       private static final long serialVersionUID = 1L;

   public static void main(String[] args) throws Exception{
       SerializableDemo demo = new SerializableDemo(1, "vikash", "Bokar", 10000);
             
       File file = new File("C:/temp/obj.ser");
       FileOutputStream stream = new FileOutputStream(file);
       ObjectOutputStream objectOutputStream = new ObjectOutputStream(stream);
       objectOutputStream.writeObject(demo);
      
       FileInputStream fileInputStream = new FileInputStream(file);
       ObjectInputStream objectInputStream = new ObjectInputStream(fileInputStream);
       SerializableDemo demo2 = (SerializableDemo) objectInputStream.readObject();
       System.out.println(demo2);
             
     }
     public SerializableDemo(){
             
     }
     public SerializableDemo(int id, String name, String address, int salary) {
             super();
             this.id = id;
             this.name = name;
             this.address = address;
             this.salary = salary;
     }

       @Override
       public String toString() {
              return "SerializableDemo [id=" + id + ", name=" + name + ", address=" + address + ", salary=" + salary + "]";
       }

       @Override
       public void writeExternal(ObjectOutput out) throws IOException {
           System.out.println("write external");
           out.writeInt(id);
           out.writeObject(name);
           out.writeObject(address);
           out.writeInt(salary);
       }

       @Override
       public void readExternal(ObjectInput in) throws IOException, ClassNotFoundException {
           System.out.println("read external");
           id = in.readInt();
           name = (String) in.readObject();
           address = (String) in.readObject();
           salary = in.readInt();
       }     
}     



Question : What is a prototype in JavaScript? Explain with an example.
Answer :
<!DOCTYPE html>
<html>
<body>

<p id="demo"></p>
<p id="demo_case"></p>
<p id="demo_tt"></p>
<p id="demo_tt1"></p>
<p id="demo_tt2"></p>
<p id="demo_tt3"></p>
<p id="demo_tt4"></p>
<p id="demo_tt5"></p>

<script>
function Person(first, last, age, eye) {
    this.firstName = first;
    this.lastName = last;
    this.age = age;
    this.eyeColor = eye;
}

var PersonX = {};
//Creating the object for Person class
var myFather = new Person("John", "Doe", 50, "blue");
var myMother = new Person("Sally", "Rally", 48, "green");

//Adding a method to all objects via the prototype
Person.prototype.display = function(){
 return "My father is " + this.firstName+" "+this.lastName+" and age is "+this.age;
}

document.getElementById("demo").innerHTML = myFather.display();
document.getElementById("demo_case").innerHTML = myMother.display();

//Adding a new internal field for the object
Person.prototype.otherName='TT';
document.getElementById("demo_tt").innerHTML = myMother.otherName;

//changing the value for the internal prototype field
Person.prototype.otherName='TT1';
//displaying the changed value
document.getElementById("demo_tt1").innerHTML = myMother.otherName;
//Kind of static variable, object will not get this value
Person.otherName='TTX';
//verifying if the otherName got changed due to above statement
document.getElementById("demo_tt2").innerHTML = myMother.otherName;//no change

//Kind of static variable, object will not get this value
Person.anyotherName='TT3';
//undefined
document.getElementById("demo_tt3").innerHTML = myMother.anyotherName;
//Accessing the static variables
document.getElementById("demo_tt4").innerHTML = Person.anyotherName;
document.getElementById("demo_tt5").innerHTML = Person.otherName;

</script>

</body>
</html>


Question : What is 1 === "1"?
Answer : false, because === compares both value and type, and a number is not a string.

Question : What is 1 == "1"?
Answer : true, because == performs type coercion before comparing.


How do you make an AJAX call using jQuery/Angular?
Answer :

How do you iterate over an array using jQuery?
Answer : $.each(arrayName, function(index, data){
    console.log(data);
});

What are the selectors in CSS?
Answer : id, element, class

What are pseudo-classes in CSS?
Answer : A pseudo-class is used to define a special state of an element.
For example, it can be used to:
Style an element when a user mouses over it
Style visited and unvisited links differently
Style an element when it gets focus
e.g. :hover, :focus, :empty

What is float in CSS?
http://www.w3schools.com/css/css_float.asp

What is the difference between GenericServlet and HttpServlet?

What is the JSP lifecycle?
Answer : Following are the lifecycle phases of a JSP:
1) translation
2) jspInit
3) jspService
4) jspDestroy

What is DispatcherServlet in Spring MVC?
Answer : It is the central dispatcher for HTTP request handlers/controllers, e.g. for web UI controllers or HTTP-based remote service exporters. It dispatches to registered handlers for processing a web request, providing convenient mapping and exception handling facilities.
This servlet is very flexible: It can be used with just about any workflow, with the installation of the appropriate adapter classes. It offers the following functionality that distinguishes it from other request-driven web MVC frameworks:

  • It is based around a JavaBeans configuration mechanism.
  • It can use any HandlerMapping implementation - pre-built or provided as part of an application to control the routing of requests to handler objects. Default is BeanNameUrlHandlerMapping and DefaultAnnotationHandlerMapping. HandlerMapping objects can be defined as beans in the servlet's application context, implementing the HandlerMapping interface, overriding the default HandlerMapping if present. HandlerMappings can be given any bean name (they are tested by type).
  • It can use any HandlerAdapter; this allows for using any handler interface. Default adapters are HttpRequestHandlerAdapter, SimpleControllerHandlerAdapter, for Spring's HttpRequestHandler and Controller interfaces, respectively. A default AnnotationMethodHandlerAdapter will be registered as well. HandlerAdapter objects can be added as beans in the application context, overriding the default HandlerAdapters. Like HandlerMappings, HandlerAdapters can be given any bean name (they are tested by type).
  • Its view resolution strategy can be specified via a ViewResolver implementation, resolving symbolic view names into View objects. Default is InternalResourceViewResolver. ViewResolver objects can be added as beans in the application context, overriding the default ViewResolver. ViewResolvers can be given any bean name (they are tested by type).
  • If a View or view name is not supplied by the user, then the configured RequestToViewNameTranslator will translate the current request into a view name. The corresponding bean name is "viewNameTranslator"; the default is DefaultRequestToViewNameTranslator.
  • The dispatcher's strategy for resolving multipart requests is determined by a MultipartResolver implementation. Implementations for Apache Commons FileUpload and Servlet 3 are included; the typical choice is CommonsMultipartResolver. The MultipartResolver bean name is "multipartResolver"; default is none.
  • Its locale resolution strategy is determined by a LocaleResolver. Out-of-the-box implementations work via HTTP accept header, cookie, or session. The LocaleResolver bean name is "localeResolver"; default is AcceptHeaderLocaleResolver.
  • Its theme resolution strategy is determined by a ThemeResolver. Implementations for a fixed theme and for cookie and session storage are included. The ThemeResolver bean name is "themeResolver"; default is FixedThemeResolver.

NOTE: The @RequestMapping annotation will only be processed if a corresponding HandlerMapping (for type-level annotations) and/or HandlerAdapter (for method-level annotations) is present in the dispatcher. This is the case by default. However, if you are defining custom HandlerMappings or HandlerAdapters, then you need to make sure that a corresponding custom DefaultAnnotationHandlerMapping and/or AnnotationMethodHandlerAdapter is defined as well - provided that you intend to use @RequestMapping.

A web application can define any number of DispatcherServlets. Each servlet will operate in its own namespace, loading its own application context with mappings, handlers, etc. Only the root application context as loaded by ContextLoaderListener, if any, will be shared.


What is @RequestBody?
Answer : Annotation indicating a method parameter should be bound to the body of the web request. The body of the request is passed through an HttpMessageConverter to resolve the method argument depending on the content type of the request. Optionally, automatic validation can be applied by annotating the argument with @Valid.
Supported for annotated handler methods in Servlet environments.

What are interceptors in Spring?
Answer : Spring MVC's handler interceptor is like a good friend and will help in time of need. Spring's handler interceptor, as rightly named, intercepts a request:

just before the controller, or
just after the controller, or
just before the response is sent to the view.

Spring's interceptor can be configured for all requests (any URI requested) or for a group of URIs (maybe for a set of modules, etc.). Just remember that controller and handler mean the same thing here. If you are a beginner in Spring, to better understand interceptors, please go through the Spring 3 MVC tutorial.
In real scenarios, Spring MVC handler interceptors are used for authentication, logging, and adding a common message to all responses. For example, if we want to remove all bold tags from the responses of displayed pages, it is possible using a Spring interceptor.

Important points about Spring interceptors:
  • HandlerInterceptor – an interface that must be implemented by Spring interceptor classes; it has the following three methods:
  • preHandle(…) – called just before the controller
  • postHandle(…) – called immediately after the controller
  • afterCompletion(…) – called just before sending the response to the view
  • HandlerInterceptorAdapter – a convenience implementation of the HandlerInterceptor interface provided by Spring. By extending it we can override only the methods we need.
Interceptor classes must be declared in the Spring context XML configuration file within the <mvc:interceptors> tag.
An interceptor can be configured to execute in two ways: for all requests, or mapped to specific URL patterns.
ORDER: All global interceptors get executed first, then the mapped interceptors. Within each group, interceptors execute in the order in which they are declared.
If preHandle(…) returns true, the execution chain continues; if it returns false, execution stops for that request at that interceptor.


What are the building blocks of AOP?
Answer : Following are the building blocks of AOP.

  • Aspect: a modularization of a concern that cuts across multiple classes. Transaction management is a good example of a crosscutting concern in enterprise Java applications. In Spring AOP, aspects are implemented using regular classes (the schema-based approach) or regular classes annotated with the @Aspect annotation (the @AspectJ style).
  • Join point: a point during the execution of a program, such as the execution of a method or the handling of an exception. In Spring AOP, a join point always represents a method execution.
  • Advice: action taken by an aspect at a particular join point. Different types of advice include "around," "before" and "after" advice. (Advice types are discussed below.) Many AOP frameworks, including Spring, model an advice as an interceptor, maintaining a chain of interceptors around the join point.
  • Pointcut: a predicate that matches join points. Advice is associated with a pointcut expression and runs at any join point matched by the pointcut (for example, the execution of a method with a certain name). The concept of join points as matched by pointcut expressions is central to AOP, and Spring uses the AspectJ pointcut expression language by default.
  • Introduction: declaring additional methods or fields on behalf of a type. Spring AOP allows you to introduce new interfaces (and a corresponding implementation) to any advised object. For example, you could use an introduction to make a bean implement an IsModified interface, to simplify caching. (An introduction is known as an inter-type declaration in the AspectJ community.)
  • Target object: object being advised by one or more aspects. Also referred to as the advised object. Since Spring AOP is implemented using runtime proxies, this object will always be a proxied object.
  • AOP proxy: an object created by the AOP framework in order to implement the aspect contracts (advise method executions and so on). In the Spring Framework, an AOP proxy will be a JDK dynamic proxy or a CGLIB proxy.
  • Weaving: linking aspects with other application types or objects to create an advised object. This can be done at compile time (using the AspectJ compiler, for example), load time, or at runtime. Spring AOP, like other pure Java AOP frameworks, performs weaving at runtime.


When not to use JPA?

When to use JPA?

What is SOAP?

When would you use SOAP but not REST?
http://searchsoa.techtarget.com/tip/REST-vs-SOAP-How-to-choose-the-best-Web-service

What are readExternal and writeExternal?

Question : How to open a file and read the content line by line.
Answer : Below is the example with the try with resource.
public class FileReaderDemo {
       public static void main(String[] args) {
              File file = new File("c:/temp/file.txt");
               try (FileReader fileReader = new FileReader(file);
                    BufferedReader bufferedReader = new BufferedReader(fileReader)) {
                    
                     String line = null;
                     while((line = bufferedReader.readLine()) != null){
                           System.out.println(line);
                     }
              } catch (Exception e) {
                     e.printStackTrace();
              }            
       }
}

Question : What are the two different types of file handling?
Answer : Character-based and byte-based.

Question : What is PrintWriter vs FileWriter?
Answer :
Similarities :
Both extend from Writer.
Both are character representation classes, that means they work with characters and convert them to bytes using default charset.
Differences

FileWriter throws IOException in case of any IO failure; this is a checked exception.
None of the PrintWriter methods throws IOException; instead they set a boolean flag which can be obtained using checkError().
PrintWriter has an optional constructor you may use to enable auto-flushing when specific methods are called. No such option exists in FileWriter.
When writing to files, FileWriter has an optional constructor which allows it to append to the existing file when the write() method is called.
Difference between PrintStream and FileOutputStream: similar to the above explanation, just replace character with byte.

PrintWriter has following methods :

close()
flush()
format()
printf()
print()
println()
write()
and its constructors accept:

File (as of Java 5)
String (as of Java 5)
OutputStream
Writer
while FileWriter has the following methods:

close()
flush()
write()
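A small sketch of the PrintWriter behaviour described above; it wraps a StringWriter here instead of a file so that it is self-contained (the names are illustrative):

```java
import java.io.PrintWriter;
import java.io.StringWriter;

public class PrintWriterDemo {
    public static void main(String[] args) {
        StringWriter buffer = new StringWriter();
        // PrintWriter layers printf/println convenience on top of any Writer
        PrintWriter writer = new PrintWriter(buffer);
        writer.printf("%s scored %d%n", "vikash", 42);
        writer.println("done");
        writer.flush();

        // PrintWriter methods never throw IOException; failures are queried via checkError()
        System.out.println("error flag: " + writer.checkError());
        System.out.print(buffer);
    }
}
```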


Question : Is SOAP a protocol?
Answer : Yes. SOAP (Simple Object Access Protocol) is a messaging protocol specification for exchanging structured information between systems, using XML, typically over HTTP.

Question : How ClassLoader works in Java?
Answer : As I explained earlier, the Java ClassLoader works on three principles: delegation, visibility and uniqueness. In this section we will see those rules in detail and understand the working of the Java ClassLoader with an example.

Delegation principles
As discussed under when a class is loaded and initialized in Java, a class is loaded on demand. Suppose you have an application-specific class called Abc.class. The first request to load this class comes to the Application ClassLoader, which delegates to its parent, the Extension ClassLoader, which further delegates to the Primordial (Bootstrap) class loader. The Primordial loader looks for that class in rt.jar, and since the class is not there, the request returns to the Extension class loader, which looks in the jre/lib/ext directory and tries to locate the class there. If the class is found there, the Extension class loader loads it and the Application class loader never will; if it is not loaded by the Extension class loader, the Application class loader loads it from the Classpath. Remember: the Classpath is used to load class files, while PATH is used to locate executables like the javac or java commands.

Visibility Principle
According to the visibility principle, a child ClassLoader can see classes loaded by its parent ClassLoader, but the reverse is not true. This means that if class Abc is loaded by the Application class loader, then trying to load class Abc explicitly using the Extension ClassLoader will throw java.lang.ClassNotFoundException.

Uniqueness Principle
According to this principle, a class loaded by a parent should not be loaded by a child ClassLoader again. Though it is entirely possible to write a class loader which violates the delegation and uniqueness principles and loads classes by itself, it is not beneficial. You should follow all class loader principles while writing your own ClassLoader.
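The delegation hierarchy can be inspected at runtime with a short sketch (output details vary by JDK version; the bootstrap loader is reported as null):

```java
public class ClassLoaderDemo {
    public static void main(String[] args) {
        // Core classes come from the bootstrap loader, which is reported as null
        System.out.println("String loader: " + String.class.getClassLoader());

        // Application classes are loaded by the application (system) class loader
        ClassLoader appLoader = ClassLoaderDemo.class.getClassLoader();
        System.out.println("app loader present: " + (appLoader != null));

        // Walking getParent() reveals the delegation chain above the application loader
        System.out.println("app loader has a parent: " + (appLoader.getParent() != null));
    }
}
```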


Question : What data is contained in a request and a response?
Answer :

Question : In RESTful services, what are the different ways of sending data from the UI?
Answer :

Question : Why is POST non-idempotent, unlike GET, PUT and DELETE?
Answer :

Question : Represent the CRUD operations in terms of RESTful web services.
Answer : Create → POST, Read → GET, Update → PUT (or PATCH for partial updates), Delete → DELETE.

Question : example ManyTo Many Relationship.
Answer :

Question : What are the diffrent http status.
Answer :

Question : Equals and Hashcode.
Answer :

Question : Optimizing technique in the hibernate.
Answer:


Question : What are the different types of inner classes?
Answer : There are four kinds of nested classes in Java: static nested classes, non-static (member) inner classes, local classes, and anonymous classes. The example below shows the first two.
public class InnerClassContainer {
       public static class StaticInnerClass {}

       public class NonStaticInnerClass {}
}

public class InnerClassDemo {
       public static void main(String[] args) {
              // Static nested class: no outer instance needed.
              InnerClassContainer.StaticInnerClass staticInnerClass =
                            new InnerClassContainer.StaticInnerClass();
              // Non-static inner class: requires an outer instance.
              InnerClassContainer innerClassContainer = new InnerClassContainer();
              InnerClassContainer.NonStaticInnerClass nonStaticInnerClass =
                            innerClassContainer.new NonStaticInnerClass();
       }
}

Question : How to reverse an array or list?
Answer :
import java.util.ArrayList;
import java.util.List;

public class ReverseArraylist {
       public static void main(String[] args) {
              List<String> strings = new ArrayList<>();
              strings.add("vikash");
              strings.add("chandra");
              strings.add("Mishra");
              // Swap elements from both ends, walking toward the middle.
              for (int i = 0; i < strings.size() / 2; i++) {
                     String temp = strings.get(i);
                     strings.set(i, strings.get(strings.size() - i - 1));
                     strings.set(strings.size() - i - 1, temp);
              }

              for (String s : strings) {
                     System.out.println(s);
              }
       }
}
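For lists, the same result is available in the standard library; a brief sketch:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class ReverseWithCollections {
    public static void main(String[] args) {
        List<String> strings = new ArrayList<>(Arrays.asList("vikash", "chandra", "Mishra"));
        Collections.reverse(strings); // in-place reversal
        System.out.println(strings);  // [Mishra, chandra, vikash]
    }
}
```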

Question : Write a code for Fibonacci series.
Answer
public class FibonacciSeries {
       public static void main(String[] args) {
              int t1 = 0; // current term
              int t2 = 1; // next term
              // Print the first 9 Fibonacci numbers.
              for (int i = 1; i < 10; i++) {
                     System.out.print(t1 + "      ");
                     t2 = t2 + t1; // new next = old next + old current
                     t1 = t2 - t1; // new current = old next
              }
       }
}

Question : Write a code for Fibonacci with the recursion.
Answer
public static void main(String[] args) {
       fibrecursive(0, 1, 0);
}

// Prints the first 11 Fibonacci terms (nbr counts terms printed so far).
// Note: the original version printed t1 once more in an else branch,
// producing a duplicate final term; that branch has been removed.
static void fibrecursive(int t1, int t2, int nbr) {
       if (nbr <= 10) {
              System.out.println(t1);
              fibrecursive(t2, t2 + t1, nbr + 1);
       }
}

Question : Write a code using recursion to get the sum of integer array?
Answer
public static void main(String[] args) {
              int intArr1[] = new int[]{1,2,3,4,5};
              int intArr2[] = new int[]{};
              int intArr3[] = new int[]{1};
              int intArr4[] = null;
              System.out.println(summer(intArr1,0));
              System.out.println(summer(intArr2,0));
              System.out.println(summer(intArr3,0));
              System.out.println(summer(intArr4,0));
       }
      
       /**
        * Recursively sums an int array; returns 0 for null or empty input.
        * @param intArr the array to sum (may be null)
        * @param pos    the current index (start the call with 0)
        * @return the sum of the elements from pos to the end
        */
       public static int summer(int[] intArr, int pos){
              if(intArr == null || intArr.length == 0){
                     return 0;
              }else if(pos == intArr.length-1){
                     return intArr[pos];
              }else{
                     return intArr[pos] + summer(intArr, pos+1);
              }
       }

What is ACID?
Answer : The characteristics of these four properties as defined by Reuter and Härder:

Atomicity : Atomicity requires that each transaction be "all or nothing": if one part of the transaction fails, then the entire transaction fails, and the database state is left unchanged. An atomic system must guarantee atomicity in each and every situation, including power failures, errors, and crashes. To the outside world, a committed transaction appears (by its effects on the database) to be indivisible ("atomic"), and an aborted transaction does not happen.

Consistency : The consistency property ensures that any transaction will bring the database from one valid state to another. Any data written to the database must be valid according to all defined rules, including constraints, cascades, triggers, and any combination thereof. This does not guarantee correctness of the transaction in all ways the application programmer might have wanted (that is the responsibility of application-level code) but merely that any programming errors cannot result in the violation of any defined rules.

Isolation :  The isolation property ensures that the concurrent execution of transactions results in a system state that would be obtained if transactions were executed serially, i.e., one after the other. Providing isolation is the main goal of concurrency control. Depending on the concurrency control method (i.e., if it uses strict - as opposed to relaxed - serializability), the effects of an incomplete transaction might not even be visible to another transaction.

Durability : The durability property ensures that once a transaction has been committed, it will remain so, even in the event of power loss, crashes, or errors. In a relational database, for instance, once a group of SQL statements execute, the results need to be stored permanently (even if the database crashes immediately thereafter). To defend against power loss, transactions (or their effects) must be recorded in a non-volatile memory.


What are the isolation levels?
Answer : Of the four ACID properties in a DBMS (Database Management System), the isolation property is the one most often relaxed. When attempting to maintain the highest level of isolation, a DBMS usually acquires locks on data or implements multi version concurrency control, which may result in a loss of concurrency. This requires adding logic for the application to function correctly.

Most DBMSs offer a number of transaction isolation levels, which control the degree of locking that occurs when selecting data. For many database applications, the majority of database transactions can be constructed to avoid requiring high isolation levels (e.g. SERIALIZABLE level), thus reducing the locking overhead for the system. The programmer must carefully analyze database access code to ensure that any relaxation of isolation does not cause software bugs that are difficult to find. Conversely, if higher isolation levels are used, the possibility of deadlock is increased, which also requires careful analysis and programming techniques to avoid.

The isolation levels defined by the ANSI/ISO SQL standard are listed as follows.

Serializable : This is the highest isolation level. With a lock-based concurrency control DBMS implementation, serializability requires read and write locks (acquired on selected data) to be released at the end of the transaction. Also range-locks must be acquired when a SELECT query uses a ranged WHERE clause, especially to avoid the phantom reads phenomenon (see below).

When using non-lock based concurrency control, no locks are acquired; however, if the system detects a write collision among several concurrent transactions, only one of them is allowed to commit. See snapshot isolation for more details on this topic.

Repeatable reads :  In this isolation level, a lock-based concurrency control DBMS implementation keeps read and write locks (acquired on selected data) until the end of the transaction. However, range-locks are not managed, so phantom reads can occur.

Read committed : In this isolation level, a lock-based concurrency control DBMS implementation keeps write locks (acquired on selected data) until the end of the transaction, but read locks are released as soon as the SELECT operation is performed (so the non-repeatable reads phenomenon can occur in this isolation level, as discussed below). As in the previous level, range-locks are not managed.

Putting it in simpler words, read committed is an isolation level that guarantees that any data read is committed at the moment it is read. It simply restricts the reader from seeing any intermediate, uncommitted, 'dirty' read. It makes no promise whatsoever that if the transaction re-issues the read, it will find the same data; data is free to change after it is read.

Read uncommitted : This is the lowest isolation level. In this level, dirty reads are allowed, so one transaction may see not-yet-committed changes made by other transactions.

Since each isolation level is stronger than those below, in that no higher isolation level allows an action forbidden by a lower one, the standard permits a DBMS to run a transaction at an isolation level stronger than that requested (e.g., a "Read committed" transaction may actually be performed at a "Repeatable read" isolation level).
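In JDBC, these four standard levels map to constants on java.sql.Connection; on a real Connection you would pass one of them to setTransactionIsolation(...). A minimal sketch:

```java
import java.sql.Connection;

public class IsolationLevels {
    public static void main(String[] args) {
        // The ANSI levels as java.sql.Connection constants; pass one of
        // these to conn.setTransactionIsolation(...) on a real Connection.
        System.out.println("READ_UNCOMMITTED = " + Connection.TRANSACTION_READ_UNCOMMITTED); // 1
        System.out.println("READ_COMMITTED   = " + Connection.TRANSACTION_READ_COMMITTED);   // 2
        System.out.println("REPEATABLE_READ  = " + Connection.TRANSACTION_REPEATABLE_READ);  // 4
        System.out.println("SERIALIZABLE     = " + Connection.TRANSACTION_SERIALIZABLE);     // 8
    }
}
```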

Question : What is media query?
Answer :

Question : Difference between get and load?
Answer : get() hits the database as soon as it is called, whereas load() returns a proxy object and loads data only when it is actually required; load() is therefore preferable when lazy loading is wanted.
Since load() throws an exception when the data is not found, we should use it only when we know the data exists.
We should use get() when we want to make sure the data exists in the database.
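The distinction can be sketched in plain Java. This is only an illustration of the proxy idea, not Hibernate's actual implementation, and all names here (DB, get, load) are invented for the example:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

public class LazyLoadSketch {
    // Pretend database table.
    static final Map<Long, String> DB = new HashMap<>();

    // get(): fetch immediately, return null when absent.
    static String get(Long id) {
        return DB.get(id);
    }

    // load(): return a "proxy" that defers the fetch until first use
    // and fails loudly if the row turns out not to exist.
    static Supplier<String> load(Long id) {
        return () -> {
            String row = DB.get(id); // the query runs here, on first access
            if (row == null) throw new IllegalStateException("No row for id " + id);
            return row;
        };
    }

    public static void main(String[] args) {
        DB.put(1L, "vikash");
        System.out.println(get(1L));     // immediate fetch
        Supplier<String> proxy = load(1L); // no fetch yet
        System.out.println(proxy.get()); // fetch happens now
    }
}
```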

Question : What changed in the String class for substring()?
Answer : A String is backed by a char array. In JDK 6, the String class contains three fields: char[] value, int offset and int count, which store the actual character array, the first index into the array, and the number of characters in the String.

When the substring() method is called, it creates a new String, but the new String's value still points to the same array in the heap; the two Strings differ only in their count and offset values. This prevents the GC from collecting the char array: even though the second String uses only a portion of the array, the shared reference keeps the whole array from being garbage collected.
In addition, if you have a very long String but only need a small part of it via substring(), this causes a memory problem: you need only a small part, yet you keep the whole thing.

This is improved in JDK 7. In JDK 7, the substring() method actually creates a new char array in the heap, so the GC can reclaim the original array once it is no longer referenced. It also improves performance, since the result carries a smaller char[] instead of referencing a small chunk inside the bigger array.
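On JDK 6 the usual workaround was to force a copy with the String(String) constructor; a small sketch (behaviorally identical on JDK 7+, where substring() copies anyway):

```java
public class SubstringCopy {
    public static void main(String[] args) {
        String big = "a-very-long-string-we-only-need-a-piece-of";
        // JDK 6: small would share big's entire backing char[].
        String small = big.substring(0, 6);
        // Classic workaround: new String(...) trims the backing array,
        // letting big's large char[] be garbage collected.
        String trimmed = new String(big.substring(0, 6));
        System.out.println(small);    // a-very
        System.out.println(trimmed);  // a-very
    }
}
```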

Question :  What are S.O.L.I.D Principles of Object Oriented Design?
Answer :  S.O.L.I.D is an acronym for the first five object-oriented design (OOD) principles by Robert C. Martin, popularly known as Uncle Bob.
These principles, when combined, make it easy for a programmer to develop software that is easy to maintain and extend. They also make it easy for developers to avoid code smells and refactor code, and they are a part of agile or adaptive software development.

S.O.L.I.D STANDS FOR: When expanded, the acronym might seem complicated, but the ideas are pretty simple to grasp.
  • S – Single-responsibility principle
  • O – Open-closed principle
  • L – Liskov substitution principle
  • I – Interface segregation principle
  • D – Dependency Inversion Principle
  • Single-responsibility Principle : S.R.P for short – this principle states that a class should have one and only one reason to change, meaning that a class should have only one job.
  • Open-closed Principle : Objects or entities should be open for extension, but closed for modification.
  • Liskov substitution principle : Let q(x) be a property provable about objects x of type T. Then q(y) should be provable for objects y of type S, where S is a subtype of T. All this states is that every subclass/derived class should be substitutable for its base/parent class.
  • Interface segregation principle :  A client should never be forced to implement an interface that it doesn’t use or clients shouldn’t be forced to depend on methods they do not use.
  • Dependency Inversion principle : The last, but definitely not the least, states that entities must depend on abstractions, not on concretions. High-level modules must not depend on low-level modules; both should depend on abstractions.
https://scotch.io/bar-talk/s-o-l-i-d-the-first-five-principles-of-object-oriented-design.
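As one illustration, the dependency-inversion principle can be sketched like this (all class and method names here are invented for the example):

```java
// High-level policy depends on an abstraction...
interface MessageSender {
    void send(String message);
}

// ...and low-level details implement it.
class EmailSender implements MessageSender {
    public void send(String message) {
        System.out.println("email: " + message);
    }
}

class SmsSender implements MessageSender {
    public void send(String message) {
        System.out.println("sms: " + message);
    }
}

// The high-level module never names a concrete sender,
// so either implementation can be swapped in without changes here.
public class NotificationService {
    private final MessageSender sender;

    NotificationService(MessageSender sender) {
        this.sender = sender;
    }

    void notify(String message) {
        sender.send(message);
    }

    public static void main(String[] args) {
        new NotificationService(new EmailSender()).notify("order shipped");
        new NotificationService(new SmsSender()).notify("order shipped");
    }
}
```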

Question : How to center an element on the screen?
Answer :

Question : What is ThreadLocal?

Question : How to detect a deadlock?

Question : How to get the nth element from the end of a LinkedList?

Question : How to get the combinations of numbers that sum to a given value, e.g. 3 should give (1,1,1), (2,1), (1,2)?

Question : PUT vs POST?

Question : Tools used to test RESTful web services.

Question : How to call a stored procedure using Hibernate?

Question : double vs float.

Question : Java vs Enterprise Java.

Question : What is the first level cache? Provide some details.
Answer :
1) The first level cache is associated with the "session" object; other session objects in the application cannot see it.
2) The scope of the cached objects is the session. Once the session is closed, the cached objects are gone forever.
3) The first level cache is enabled by default and you cannot disable it.
4) When we query an entity the first time, it is retrieved from the database and stored in the first level cache associated with the Hibernate session.
5) If we query the same object again with the same session object, it is loaded from the cache and no SQL query is executed.
6) A loaded entity can be removed from the session using the evict() method. The next load of this entity will again make a database call if it was removed using evict().
7) The whole session cache can be cleared using the clear() method, which removes all the entities stored in the cache.


Question : How to decide when to use SOAP and when to use REST?

Question : Define Hibernate flush.
Answer : Flushing is the process of synchronizing the underlying persistent store with persistable state held in memory.

Question : What is the use case of flush?
Answer : One common case for explicitly flushing is when you create a new persistent entity and you want it to have an artificial primary key generated and assigned to it, so that you can use it later on in the same transaction. In that case calling flush would result in your entity being given an id.
More...
In default configuration Hibernate tries to sync up with the database at three locations.

1. before querying data
2. on commiting a transaction
3. explictly calling flush
If the FlushMode is set as FlushMode.Manual, the programmer is informing hibernate that he/she will handle when to pass the data to the database.Under this configuration the session.flush() call will save the object instances to the database.
A session.clear() call acutally can be used to clear the persistance context.



