Maintaining peak storage performance is critical for the modern storage administrator. More and more mission-critical applications rely on shared storage infrastructure for real-time activities. The fact is, new storage technologies have improved storage performance, flexibility and manageability, but they have not solved the issues generated at the file system level, such as the tidal wave of unnecessary I/O caused by the Windows OS splitting files apart upon write.

This surplus of unnecessary I/O causes storage space to be used inefficiently, reducing both capacity and performance. Excessive I/O creates bottlenecks, which result in significantly increased latency, application freezes, system hangs, long disk queue lengths and other problems.

How do I increase my storage performance?

As storage solutions and infrastructures have evolved, so have solutions to address performance. “Prevention is better than the cure” is a popular and very IT applicable maxim. That philosophy, applied to preventing unnecessary read and write I/O with V-locity’s IntelliWrite® and IntelliMemory I/O optimization technology, is the ultimate solution to maximizing performance of modern storage solutions.

V-locity® uniquely addresses the unnecessary I/O patterns of reads and writes by aggregating data on writes to perform sequentially while caching reads on available server memory without contention to the application. As a result, V-locity boosts application performance by 50% or more by (1) maximizing the efficiency of every single I/O from write requests that leave the server and (2) caching the most active data from read requests using available server memory.


In the storage arena, thin provisioning is a fairly hot topic. Planning ahead for growth with traditional storage provisioning, system admins typically give themselves more storage space than is actually needed. This results in substantial inefficiency, as space is allocated but often not used.

Condusiv Technologies Thin Provisioned Storage Solutions

Since this allocated but unused space cannot be used by other applications, many businesses face the need to buy more storage space. As this need for more space grows, so does the cost.

The problem of wasted space, and the cost associated with it, is eliminated by moving to thin provisioned storage. Thin provisioning is essentially the act of using virtualization technology to give the appearance of more physical resources than are actually available.
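The idea behind thin provisioning can be sketched in a few lines of code. This is purely our illustration of the concept described above (not any vendor's implementation): the volume promises a large virtual capacity to the host, but physical blocks are allocated only when first written.

```python
# Conceptual sketch of thin provisioning: a volume reports a large
# virtual capacity but allocates backing blocks only on first write.

class ThinVolume:
    def __init__(self, virtual_blocks):
        self.virtual_blocks = virtual_blocks   # capacity promised to the host
        self.backing = {}                      # physical blocks, allocated lazily

    def write(self, block_no, data):
        if not 0 <= block_no < self.virtual_blocks:
            raise IndexError("write beyond virtual capacity")
        self.backing[block_no] = data          # allocate on first write

    def physical_usage(self):
        return len(self.backing)               # only written blocks consume space

vol = ThinVolume(virtual_blocks=1_000_000)     # appears as ~1M blocks to the host
vol.write(0, b"boot")
vol.write(42, b"data")
print(vol.physical_usage())                    # only 2 physical blocks allocated
```

The gap between `virtual_blocks` and `physical_usage()` is exactly the over-allocation that traditional provisioning would have paid for up front.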

However, the principle of thin provisioning suffers from some unique drawbacks at both the computational and storage levels.

More data, more storage requirements, more data retrieval and longer retention of data have made the speed and efficiency of regular backups a vital component of corporate IT success.

Data Backup

And yet, the time required for file-based data backup continues to increase while the time allotted for it remains constant. This critical issue becomes even more pressing once it is understood that, despite advances in backup technology, backup speeds continue to slow as a direct result of increasingly non-optimized data on the disk.

Data backup involves file access that creates millions of read and write I/Os. Each I/O represents a unit of time and a degree of latency. Backup times creep up and quickly exceed their windows. Solutions such as adding more hardware will never be effective if the underlying cause is not eliminated or prevented in the first place.
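The arithmetic behind a blown backup window is simple. A back-of-the-envelope sketch with illustrative numbers (not measurements, and ignoring parallelism) shows how total backup time scales directly with I/O count:

```python
# Illustrative arithmetic: backup time grows with I/O count, so halving
# unnecessary I/O shrinks the backup window proportionally.

def backup_hours(io_count, latency_ms):
    """Serial-I/O estimate: total time = I/O count x per-I/O latency."""
    return io_count * latency_ms / 1000 / 3600

fragmented = backup_hours(io_count=50_000_000, latency_ms=0.5)
optimized  = backup_hours(io_count=25_000_000, latency_ms=0.5)
print(round(fragmented, 2))   # ~6.94 hours
print(round(optimized, 2))    # ~3.47 hours
```

Real backups overlap I/Os, but the principle holds: every unnecessary I/O eliminated is time handed back to the backup window.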


CPU and memory performance has increased geometrically while storage performance has lagged behind. RAID arrays have improved to help reduce this gap but can suffer from slow performance due to excessive I/O. Slow RAID array performance can bottleneck a whole system, slowing your applications.

The Windows OS inherently splits files apart and scatters them around a volume instead of writing them contiguously in one place. When this happens, a large number of unnecessary requests are sent to the RAID controller.

This is where V-locity I/O optimization software comes in—by proactively preventing unnecessary I/O at the Windows OS level. Preventing unnecessary I/O not only prevents the degradation in VM and application performance, but improves RAID array performance, keeping it running at peak speeds.

As businesses grow and their demands for network storage increase, they follow a natural progression of storage environments. As the direct-attached storage (DAS) on their Windows servers becomes insufficient, businesses frequently move on to network attached storage (NAS) or a storage area network (SAN).

The full promise of storage not directly attached to the server is best fulfilled by implementing a SAN, which has advantages in performance, reliability, availability and provisioning.

But the overwhelming benefit of SAN storage often gives storage administrators the false impression that by simply implementing a SAN and following the vendor’s instructions, they will achieve the best possible performance and reliability in their Windows Server-based network.

As IT professionals seek to overcome performance bottlenecks at the storage level in pursuit of faster boot times, application load times and more, some are turning to flash-based solid state drives (SSDs) to get higher performance out of their servers.

Solid state drives are generally considered to be faster, more powerful, more efficient and, in some respects, more reliable than hard drives.

High-end SSDs have proven to yield some very impressive read times, well over double those of a typical SATA hard disk drive.

The problem is that solid state drives start out fast but quickly begin losing speed.


The popular theory is that because an SSD has no moving parts, file fragmentation is not an issue. The truth is that SSDs experience write speed degradation due to fragmented file write activity and free space fragmentation. HyperFast® solid state drive optimizer, now included as standard in Diskeeper®, keeps your system running as fast as the day you purchased it by optimizing the free space on your SSD.

Key Benefits

V-locity’s IntelliWrite® technology optimizes I/O write operations, eliminating the performance penalties caused by the Windows OS splitting files into pieces and writing each piece to a different logical location within the SAN or NAS. V-locity is aware of space allocation and aggregates writes to behave sequentially, requiring less I/O for every file written. Subsequent reads also benefit, since only minimal I/O is required to fulfill each request. As a result, only productive I/O requests are processed across the entire infrastructure, enabling more data to be processed in the same amount of time.
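The I/O savings from write aggregation can be illustrated with a toy model. This is our own sketch of the general principle, not IntelliWrite's actual code: writing a file as scattered fragments issues one I/O request per fragment, while coalescing the data first issues a single sequential I/O.

```python
# Toy model of write aggregation: scattered fragments vs. one
# coalesced sequential write (illustrative only).

def write_fragmented(fragments):
    """One I/O request per fragment, each landing at a different location."""
    return len(fragments)                # number of I/O requests issued

def write_aggregated(fragments):
    """Coalesce fragments into one contiguous buffer, then write once."""
    buffer = b"".join(fragments)
    return 1 if buffer else 0            # a single sequential I/O request

pieces = [b"x" * 4096] * 64              # a 256 KB file split into 64 pieces
print(write_fragmented(pieces))          # 64 I/O requests
print(write_aggregated(pieces))          # 1 I/O request
```

Every layer below the server, network, controller and storage array, processes 63 fewer requests for the same payload.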

When only productive I/O traffic is processed through the server, network and storage, I/Os per second are accelerated, latency is greatly reduced, and more work can be performed in the same amount of time. This greatly improves the efficiency of all VMware ESX/ESXi and Microsoft Hyper-V virtual platforms and physical servers for increased bandwidth.

V-locity I/O optimization is tailored for virtual environments that leverage a SAN or NAS as it proactively provides I/O benefit—improving storage performance and benefiting advanced storage features like snapshots, replication, data deduplication and thin provisioning. Since V-locity optimizes I/O at the server level, it is complementary to all SAN and NAS storage systems and media types (e.g., HDD or SSD).

Some storage arrays include a feature permitting thin provisioning of their LUNs (logical unit numbers). This thin provisioning occurs at the storage layer, below the virtual platform storage stack, and essentially provides scalable datastores.

Thin provisioning at the datastore level has been the source of some concern for storage administrators with regards to recovery from over-provisioning. When virtual disks are deleted or copied away from a datastore, the array itself is not informed that those storage blocks are now free. You can see how this leads to needless storage consumption.

vSphere 5 from VMware introduced a solution for this issue. The new vSphere Storage APIs for Array Integration (VAAI) for thin provisioning uses the SCSI UNMAP command to tell the storage array that space previously occupied by a VM can be reclaimed. This addresses one aspect of the issue with thin virtual machine growth.
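A conceptual model makes the UNMAP behavior concrete. The class and method names below are hypothetical (this is not the VAAI API): the point is that a thin-provisioned array only releases space when it is explicitly told which logical blocks no longer hold live data.

```python
# Conceptual model of SCSI UNMAP against a thin-provisioned array
# (hypothetical names, not the VAAI or SCSI interface).

class ThinArray:
    def __init__(self):
        self.allocated = set()            # logical blocks backed by physical space

    def write(self, block):
        self.allocated.add(block)         # writes allocate physical space

    def unmap(self, blocks):
        # UNMAP tells the array these blocks no longer hold live data,
        # so their physical space can be reclaimed.
        self.allocated -= set(blocks)

array = ThinArray()
vm_blocks = list(range(100))
for b in vm_blocks:                       # a VM's virtual disk consumes 100 blocks
    array.write(b)

# Deleting the VM at the hypervisor level alone leaves all 100 blocks
# consumed on the array; issuing UNMAP is what reclaims them.
array.unmap(vm_blocks)
print(len(array.allocated))               # 0
```

Without the `unmap` call, the array's view of allocation never shrinks, which is precisely the over-provisioning recovery problem described above.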

Thin provisioning is a method for optimizing utilization of available storage in a shared storage environment. It is a flexible way to allocate space to systems on a just-enough, just-in-time basis, and it applies to SANs as well as virtual systems. One disadvantage of this technology is that deleted content is simply marked unused at the file system layer rather than zeroed out, causing wasted space.

With V-locity® I/O optimization software, we introduced a new Automatic Space Reclamation engine. This engine automatically zeroes out the no longer used free space within thin virtual disks, without taking them offline and with no impact on resource usage.

So what does this mean? Reclaiming the deleted space makes virtual disk compaction easy. The thin virtual disks themselves stay slimmed down within datastores, giving more control back to the storage admins governing provisioning.

The end result is that less storage is required, so instead of purchasing additional storage, your company can better utilize the storage it already has, saving you money.

The effects of excessive I/O in SAN storage often manifest in reduced application performance and inefficient use of storage. Application response times begin to degrade, the time necessary to load large files and applications grows longer, and the overall user experience is negatively impacted.

End-users begin to feel that their computer is slowing down, leading to help desk calls with complaints about network performance or other problems. The reality is that the massive amount of unnecessary I/O is causing data manipulation times to increase to the point where the delay becomes perceptible to the end-user.

As described earlier, V-locity® I/O optimization software addresses these unnecessary read and write patterns by aggregating writes to perform sequentially while caching reads in available server memory without contention to the application, boosting application performance by 50% or more.

Much of the I/O traffic moving through the infrastructure shares similar data, yet every byte of that data travels the entire distance from server to storage and back for every I/O request, even when it is frequently accessed data. IntelliMemory technology addresses this read request inefficiency by caching the most active data in the server’s available memory. Should an application require more memory, V-locity serves memory back to the application to ensure there is never a case of memory starvation or resource contention. Condusiv Technologies’ self-learning caching algorithms have set the gold standard for the industry as they are OEM’d by eight of the top ten largest PC manufacturers in the world.
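The value of a server-side read cache can be shown with a minimal least-recently-used (LRU) sketch. This is our illustration of the general caching idea, not IntelliMemory's self-learning algorithm: repeat reads of active blocks are served from memory, so only cold reads travel to storage.

```python
# Minimal LRU read-cache sketch: repeat reads of hot blocks are
# served from memory instead of traveling to storage.

from collections import OrderedDict

class ReadCache:
    def __init__(self, capacity, read_from_storage):
        self.capacity = capacity
        self.read_from_storage = read_from_storage
        self.cache = OrderedDict()             # insertion order = recency order
        self.storage_reads = 0

    def read(self, block):
        if block in self.cache:
            self.cache.move_to_end(block)      # cache hit: served from memory
            return self.cache[block]
        self.storage_reads += 1                # cache miss: go to storage
        data = self.read_from_storage(block)
        self.cache[block] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict least recently used block
        return data

cache = ReadCache(capacity=2, read_from_storage=lambda b: f"data-{b}")
for block in [1, 2, 1, 1, 2]:                  # the active working set stays cached
    cache.read(block)
print(cache.storage_reads)                     # only 2 of 5 reads hit storage
```

With a working set that fits in the cache, storage sees each hot block once; every subsequent request is absorbed at the server.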

A file system generates many unnecessary I/Os; the RAID controller, however, is unaware that the multiple I/Os are part of the same file and treats each one as a separate entity. This degrades RAID array performance. Preventing excessive and unnecessary I/O at the file system level and consolidating data into a single I/O improves the performance of both write and read activities.

At some point a SAN administrator will realize that the SAN storage is no longer performing as well as it once did. Investigating the cause of the performance slowdown usually indicates problems with free space or available storage.

The administrator may wonder what is causing all the unexpected I/O, and increasing the amount of storage available is usually the easiest solution. However, in many cases adding storage is unnecessary because storage on the SAN is not the problem.

One of the most significant issues, and the most unrecognized, is the excessive amount of unnecessary I/O in the SAN environment. With the implementation of a SAN, many Windows Server administrators believe that excessive I/O, which they accepted and dealt with when using DASD storage, has gone away.

Storage admins often overlook excessive I/O when evaluating the problem of reduced SAN efficiency. When SAN performance problems are noticed, the knee-jerk reaction is to add additional storage, on the presumption that a lack of free space is causing the SAN to slow down. However, both overall performance and storage efficiency can be affected by the tidal wave of unnecessary I/O that occurs when Windows Server writes data out to storage, regardless of whether that storage is DASD, NAS, or SAN.

When it comes to read performance, SSDs are by far faster than your standard hard drive. One of the main reasons for such fast read times is the lack of “seek time” that an SSD has to perform to find and retrieve a piece of data versus a hard drive. Simply put, a hard drive has to move a magnetic head attached to an arm over a track on a platter and then, through various means, locate the requested data before it can read or write anything.

An SSD, on the other hand, reads data with an electrical pulse, which is much faster by comparison. The absence of moving parts cuts the time down considerably.

Writing data to an SSD is another story entirely. Small free spaces scattered throughout an SSD volume at the logical level cause the file system to write a file in fragments to those small free spaces, degrading write performance to the solid state device by as much as 80%. Diskeeper’s IntelliWrite® technology prevents fragmentation from happening in the first place, producing contiguous, sequential writes that are much faster and more efficient than fragmented, random writes. Combined with HyperFast, this technology ensures optimal performance.

Furthermore, SSDs can perform only a finite number of writes over their lifetime. One of the downfalls of SSDs is that old data must be erased before new data is written in its place, rather than simply being overwritten as on hard drives. Because of this doubling effect of needing to read and erase before writing again, SSDs undergo twice as much use, doubling the wear and tear and causing major issues, including a shortened lifespan and slower random write performance.

As the SSD approaches its limit, more fragmentation and write errors occur, causing the SSD to slow. Write performance decreases proportionately as free space fragmentation increases. All SSDs will suffer from this problem at one point or another unless Diskeeper is used to optimize the solid state drive.



Copyright © 2018 Softgate Solutions LLC. All Rights Reserved.