Tuesday, December 11, 2007
He was facing a problem where his function would round 544.435 off to 544.43 (instead of the intended 544.44), while 544.445 would round to 544.45 as intended.
After a fair bit of research I could conclude the following:
The error was because he was taking input and providing the output as Double.
Now MSDN says that Double is a FLOATING POINT variable type. It takes up less space in memory but is prone to some rounding problems.
So while storing the above two DOUBLES, .NET stored them as follows:
544.435 ------------- 544.43499999999994543031789362430572509765625
544.445 ------------- 544.4450000000000500222085975110530853271484375
So doing a Multiplication with the precision factor (100 in this case) and taking a Math.Floor of that number converts
544.435 to 544.43
544.445 to 544.45
More explanation about binary floating point can be found at
BTW, the solution to the above-mentioned problem is to use something that is a FIXED POINT variable and not a floating-point variable like Double.
So using Decimal instead of Double in the application resolves the rounding issue.
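A minimal sketch of the difference (the class and variable names are mine, mirroring the numbers above):

```csharp
using System;

class RoundingDemo
{
    static void Main()
    {
        // The double cannot represent 544.435 exactly; it actually stores
        // 544.434999..., so rounding to two places gives 544.43.
        double d = 544.435;
        Console.WriteLine(Math.Round(d, 2));  // 544.43

        // Decimal stores 544.435 exactly in base 10, so midpoint rounding
        // away from zero gives the intended 544.44.
        decimal m = 544.435m;
        Console.WriteLine(Math.Round(m, 2, MidpointRounding.AwayFromZero));  // 544.44
    }
}
```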
Thursday, November 22, 2007
Wednesday, October 17, 2007
This is where it all started. I faced an interesting problem while working on my Sessions application. Declaring a structure as :
struct ShortInt
{
    Int32 i;
    short s;
}

If someone is asked about the size of this structure, I bet most people would answer as I did: 6 bytes.
But surprisingly, the answer is 8 bytes.
The simple reason for that is that memory is allocated in chunks, and if the declaration of the structure is done wisely, it can save a lot of memory:
struct exampleStruct
{
    Byte b1;
    Int32 i;
    short s;
    Byte b2;
}

Here, if we look at this structure, the memory for this block is allocated as follows:
Byte b1 ----- A chunk of 4 bytes is allocated
Int32 i ------ This chunk has only 3 bytes left, so a new 4-byte chunk is allotted
short s ------ The previous chunk is full, so the next 4-byte chunk is allotted
Byte b2 ----- The previous chunk has 2 bytes left and can accommodate a byte, hence the byte variable is accommodated in the same chunk
So total size becomes 12 bytes.
Now if we redefine our structure as follows:
struct exampleStruct
{
    Byte b1;
    Byte b2;
    short s;
    Int32 i;
}

Now the memory allocation would be as follows:
Byte b1 ----- A chunk of 4 bytes is allotted
Byte b2 ----- A byte in the same chunk
short s ------ The previous chunk has 2 bytes left in it
Int32 i ----- Since the previous chunk is full, a new chunk is allotted
Hence the total size is 8 bytes only.
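This can be verified with a quick sketch; the type and field names are mine, and Marshal.SizeOf reports the size of a sequential-layout struct:

```csharp
using System;
using System.Runtime.InteropServices;

// Field order as in the first declaration above: padding pushes the size to 12.
struct WastefulStruct
{
    public byte B1;
    public int I;
    public short S;
    public byte B2;
}

// Same fields reordered as in the second declaration: fits in 8 bytes.
struct CompactStruct
{
    public byte B1;
    public byte B2;
    public short S;
    public int I;
}

class PaddingDemo
{
    static void Main()
    {
        Console.WriteLine(Marshal.SizeOf(typeof(WastefulStruct))); // 12
        Console.WriteLine(Marshal.SizeOf(typeof(CompactStruct)));  // 8
    }
}
```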
More can be found at the following links:
Wednesday, August 22, 2007
Represents text that should be kept confidential. The text is encrypted for privacy when being used, and deleted from computer memory when no longer needed. This class cannot be inherited.
Storing any sensitive data like passwords etc in the standard System.String can be a potential threat to the data for the following reasons:
>> It is stored on the Managed Heap and is not pinned in the memory, so the garbage collector can move it around at will leaving several copies in memory. The code will not know that this has happened, and even if it could figure out that the string was moved, there is no way to clear out the other copies. Instead we have to wait for the CLR to allocate another object where the sensitive data is so that the memory gets erased.
>> It's not encrypted, so anyone who can read process's memory will be able to see the value of the string easily. Also, if the process gets swapped out to disk, the unencrypted contents of the string will be written to the swap file.
>> It's not mutable, so whenever it is modified, both the old version and the new version will be in memory.
>> Since it's not mutable, there's no effective way to clear it out when you're done using it.
Hence, .NET 2.0 introduced a new class under System.Security namespace called SecureString, that can be used in place of standard Strings to store sensitive values.
Using SecureString eliminates the above mentioned issues as:
>> The SecureString is not stored in the managed heap while standard strings are and therefore it will not be replicated to multiple locations in memory.
>> SecureStrings are stored in an encrypted form and need to be decrypted only when they are used. This period of decryption can be kept as short as possible, so even if the process is swapped out to disk while the string is encrypted, the plaintext will not end up in the swap file.
>> The keys used to encrypt the string are tied to the user, logon session, and process. This means that any minidumps taken of the process will contain secure strings which are not decryptable.
>> SecureStrings are securely zeroed out when they're disposed of. System.Strings are immutable and cannot be cleared when you've finished with the sensitive data
To create a SecureString, you append one character at a time:
System.Security.SecureString secString = new System.Security.SecureString();
secString.AppendChar('a');
secString.AppendChar('b');
When the string contains the data you want, you can make it immutable and uncopyable by calling the MakeReadOnly method:
secString.MakeReadOnly();
To read the secure value, use the SecureStringToBSTR() method as follows:
IntPtr ptr = System.Runtime.InteropServices.Marshal.SecureStringToBSTR(secString);
string sDecrypString = System.Runtime.InteropServices.Marshal.PtrToStringUni(ptr);
System.Runtime.InteropServices.Marshal.ZeroFreeBSTR(ptr); // zero and free the unmanaged copy when done
The garbage collector will remove SecureStrings when they're no longer referenced, but you
can dispose of a SecureString deterministically by using the Dispose() method:
secString.Dispose();
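Putting the pieces above together, a minimal end-to-end sketch (the sample password is purely illustrative; in real code the characters would come from user key presses rather than a managed string literal, which would itself linger in memory):

```csharp
using System;
using System.Runtime.InteropServices;
using System.Security;

class SecureStringDemo
{
    static void Main()
    {
        SecureString secString = new SecureString();

        // Build the secret one character at a time.
        foreach (char c in "p@ssw0rd")
            secString.AppendChar(c);

        // Lock the value against further modification.
        secString.MakeReadOnly();

        // Decrypt for the briefest possible window...
        IntPtr ptr = Marshal.SecureStringToBSTR(secString);
        try
        {
            string sDecrypted = Marshal.PtrToStringUni(ptr);
            Console.WriteLine(sDecrypted.Length);  // use the plaintext here
        }
        finally
        {
            // ...and zero out and free the unmanaged copy immediately.
            Marshal.ZeroFreeBSTR(ptr);
        }

        // Zero out and free the encrypted buffer deterministically.
        secString.Dispose();
    }
}
```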
Thursday, August 16, 2007
The best part of it was creating an ATOM feed reader for my blog. I finally succeeded in creating one using the idea from Dynamic Drive.
Here I could create an ATOM feed scroller which shows all the posts on the blog and also pauses at each post, with a link to the original post on my blog.
A sample can be seen on THIS SITE where I have hosted it (this is a trial and hence will only be available to me till September 12, 2007) as well as at the top of this blog.
By that time I would be looking to modify it so that it just required the client side code and no server side coding is involved.
Currently it uses an aspx page to display the posts as there is a bit of server side code involved in it. I would try to eliminate that ASAP.
Once done, I would make this a portable widget that could be used to display any ATOM feed providing its URL.
Friday, August 10, 2007
It required reading XML and playing around with the nodes. Strangely, though it looked easy to work with, I had a hard time getting at the node required to display things.
Below is the format that an ATOM XML feed uses:
<feed xmlns='http://www.w3.org/2005/Atom'>
  <title type='text'>Ashutosh Vyas's Page Concept in ASP.NET</title>
  ...
</feed>
Now all I needed was to find out the root node and traverse to the Node "feed/title" to find out the title of the blog to display on the top of the scroller.
To my knowledge, it was as easy as
XmlNode titleNode = rssDoc.SelectSingleNode("feed/title");
But that did not happen to be the case. It always returned null.
I tried grabbing the root node (feed) using
XmlNode feedNode = rssDoc.SelectSingleNode("feed");
but this would again return the same null.
Strangely for me, rssDoc.DocumentElement would most certainly return the required feed node.
After a bit of help from MSDN and other groups, I discovered what I did not know till now, and I suspect many people do not, for lack of use.
You require a NAMESPACEMANAGER to get those nodes out.
So to dig out something from a document with a default namespace like this one, we need the following code:
XmlNode feedNode = rssDoc.DocumentElement;
XmlNamespaceManager nsMgr = new XmlNamespaceManager(rssDoc.NameTable);
nsMgr.AddNamespace("prefix", "http://www.w3.org/2005/Atom");
String feedTitle = feedNode.SelectSingleNode("prefix:title", nsMgr).InnerText;
Thursday, August 2, 2007
This is the error most of us have faced without a clue as to what leads to this error and the server being unavailable. Here's the reason:
ASP.NET uses threads from a common language runtime (CLR) thread pool to process requests. As long as there are threads available in the thread pool, ASP.NET has no trouble dispatching incoming requests. But once the thread pool becomes saturated, i.e. all the threads inside it are busy processing requests and no free threads remain, new requests have to wait for threads to become free. If the logjam becomes severe enough and the queue fills to capacity, ASP.NET throws this error stating that Server is Unavailable.
So what's the solution? Well, the easiest way is to increase the maximum size of the thread pool, allowing more threads to be created. That's the course developers often take when repeated "Server unavailable" errors are reported. Another common course of action is adding more servers to the Web farm. But increasing the thread count (or the server count) doesn't solve the issue. It just provides temporary relief.
One solution to this implemented in ASP.NET 2.0 is the use of ASYNCHRONOUS PAGES.
When ASP.NET receives a request for a page, it grabs a thread from a thread pool and assigns that request to the thread. A normal, or synchronous, page holds onto the thread for the duration of the request, preventing the thread from being used to process other requests. If a synchronous request becomes I/O bound—for example, if it calls out to a remote Web service or queries a remote database and waits for the call to come back—then the thread assigned to the request is stuck doing nothing until the call returns. That impedes scalability because the thread pool has a finite number of threads available. If all request-processing threads are blocked waiting for I/O operations to complete, additional requests get queued up waiting for threads to be free. At best, throughput decreases because requests wait longer to be processed. At worst, the queue fills up and ASP.NET fails subsequent requests with 503 "Server Unavailable" errors.
Asynchronous pages offer a neat solution to the problems caused by I/O-bound requests. Page processing begins on a thread-pool thread, but that thread is returned to the thread pool once an asynchronous I/O operation begins in response to a signal from ASP.NET. When the operation completes, ASP.NET grabs another thread from the thread pool and finishes processing the request. Scalability increases because thread-pool threads are used more efficiently. Threads that would otherwise be stuck waiting for I/O to complete can now be used to service other requests. The direct beneficiaries are requests that don't perform lengthy I/O operations and can therefore get in and out of the pipeline quickly. Long waits to get into the pipeline have a disproportionately negative impact on the performance of such requests
The concept of Asynchronous Pages is available only in ASP.NET 2.0 but it could be implemented in ASP.NET 1.x in a way outlined in the below mentioned link.
The trick here is to implement IHttpAsyncHandler in a page's codebehind class, prompting ASP.NET to process requests not by calling the page's IHttpHandler.ProcessRequest method, but by calling IHttpAsyncHandler.BeginProcessRequest instead.
ASP.NET 2.0 vastly simplifies the way you build asynchronous pages. You begin by including an Async="true" attribute in the page's @ Page directive, like so:
<%@ Page Async="true" ... %>
Setting this attribute to true tells ASP.NET to make the page implement IHttpAsyncHandler. Along with this, you need to register a Begin method and an End method via Page.AddOnPreRenderCompleteAsync:
// Register async methods
AddOnPreRenderCompleteAsync(
    new BeginEventHandler(BeginAsyncOperation),
    new EndEventHandler(EndAsyncOperation)
);
With these in place, the page starts its normal life cycle until the end of the OnPreRender event invocation. At this point ASP.NET calls the Begin method that we registered earlier and the operation begins (calling the database, etc.); meanwhile, the thread that had been assigned to the request goes back to the thread pool. The Begin method returns an IAsyncResult to ASP.NET, which lets it determine when the operation has completed; at that point a new thread is taken from the thread pool and the End method (that we registered earlier, remember?) is called.
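Putting it together, a hypothetical codebehind for a page marked Async="true" might look like the following sketch (the URL and method names are my own placeholders, following the pattern Prosise describes):

```csharp
using System;
using System.Net;
using System.Web.UI;

public class AsyncPageDemo : Page
{
    private WebRequest _request;

    protected void Page_Load(object sender, EventArgs e)
    {
        // Register the Begin/End pair with the page framework.
        AddOnPreRenderCompleteAsync(
            new BeginEventHandler(BeginAsyncOperation),
            new EndEventHandler(EndAsyncOperation));
    }

    IAsyncResult BeginAsyncOperation(object sender, EventArgs e,
                                     AsyncCallback cb, object state)
    {
        // Kick off the I/O-bound work; the request thread returns
        // to the thread pool as soon as this method returns.
        _request = WebRequest.Create("http://example.com/");
        return _request.BeginGetResponse(cb, state);
    }

    void EndAsyncOperation(IAsyncResult ar)
    {
        // Runs on a (possibly different) thread-pool thread
        // once the I/O has completed.
        using (WebResponse response = _request.EndGetResponse(ar))
        {
            // consume the response here...
        }
    }
}
```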
Jeff Prosise explains it all in
Friday, July 6, 2007
But during this process of optimization, at times I wondered whether Approach 1 or Approach 2 was the more optimized.
One such condition was when I used For loops to iterate through the items of collection.
Now here's the theory:
// Approach 1: for loop
int[] indexArray = new int[100];
int total = 0;
for (int i = 0; i < indexArray.Length; i++)
    total += indexArray[i];
// Approach 2: foreach loop
int[] indexArray = new int[100];
int total = 0;
foreach (int i in indexArray)
    total += i;
The advantage of a foreach loop over a for loop is that it is not at all necessary to know the number of items in the collection when the iteration starts. This avoids iterating off the end of the collection using an index that is not available. A foreach loop also allows code to iterate over a collection without first loading the entire collection into memory.
So herein we can safely assume that using foreach is an optimized approach rather than using a for loop.
NOW LETS MOVE TO THE OTHER SIDE OF IT:
If we look closely at the native code the JIT produces for the above two constructs:
cmp dword ptr [eax+4],0
mov ecx,dword ptr [eax+edx*4+8]
inc esi
cmp esi,dword ptr [eax+4]
Here, the comparison is done at two stages:
1. For the first run it is done only once to check if the counter is good to continue into the loop.
2. Inside the loop where it is exactly comparing and recalling the code.
This is very well optimized in the loop.
cmp esi,dword ptr [ebx+4]
cmp esi,dword ptr [ebx+4]
mov eax,dword ptr [ebx+esi*4+8]
mov dword ptr [ebp-0Ch],eax
mov eax,dword ptr [ebp-0Ch]
add dword ptr [ebp-8],eax
cmp esi,dword ptr [ebx+4]
Clearly the two listings are different. There are some unwanted comparisons and some moves that are not really required. That's because foreach treats everything as a collection and hence uses the code for one, which reduces performance when the target is not a collection but a simple array.
So I am still undecided whether for is the optimized version or foreach is... :(
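One way to move past the theory is simply to time the two loops. Below is a rough micro-benchmark sketch (array size and structure are my own choices; results depend heavily on JIT warm-up and Release vs Debug builds, so treat the numbers as indicative only):

```csharp
using System;
using System.Diagnostics;

class LoopBench
{
    static void Main()
    {
        int[] data = new int[10000000];

        // Time the classic for loop.
        Stopwatch sw = Stopwatch.StartNew();
        int total1 = 0;
        for (int i = 0; i < data.Length; i++)
            total1 += data[i];
        sw.Stop();
        Console.WriteLine("for:     " + sw.ElapsedMilliseconds + " ms");

        // Time the foreach loop over the same array.
        sw = Stopwatch.StartNew();
        int total2 = 0;
        foreach (int v in data)
            total2 += v;
        sw.Stop();
        Console.WriteLine("foreach: " + sw.ElapsedMilliseconds + " ms");

        // Use the totals so the loops cannot be optimized away.
        Console.WriteLine(total1 + total2);
    }
}
```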
Wednesday, May 23, 2007
It basically compares keywords and functions one-to-one between the two languages.
It can be extremely useful for people who find it difficult to transition from VB.NET to C#.
Tuesday, May 8, 2007
While this proved quite helpful, I felt a strong need for IntelliSense in VS.NET for my objects and hence decided to implement it.
After a bit of googling and researching, I could find a possible way of implementing the same.
Here's how it goes.
As we all know, VS 2005 implements Intellisense using the XSD files. So the idea was to provide VS IDE with my own XSD so that it could use that XSD instead of the default one.
The process involved the following steps:
1. Create your custom XSD:
2. Make this XSD available to VS.NET IDE
3. Using the custom XSD
1. Create your custom XSD:
Since I was creating IntelliSense for HTML, I always required the default HTML IntelliSense plus a few entries of my own. So I decided to reuse the XSD that the VS IDE itself uses to provide IntelliSense in HTML view. Those XSDs are located at
Apart from these, there are a couple more changes that are required so that the IDE distinguishes between the two schemas.
<xsd:schema version="1.0" targetNamespace='http://schemas.microsoft.com/intellisense/MyCustomSchema'
vs:SuccinctFriendlyName="My Custom Schema">
So now we are all done with the changes. We have a well-made XSD to use with our IDE.
2. Make this XSD available to VS.NET IDE
In order to make this XSD available to VS IDE, we need to perform two steps.
- Firstly copy the XSD file to the location where the default XSDs for IDE are located.
- Secondly make a registry entry to register this schema with the IDE. For this, create a .reg file with the following text:
"Friendly Name"="My Custom Schema"
Running this script adds the information to the Registry and registers the schema with the VS.NET IDE.
One thing to note here is that I have hardcoded the name SCHEMA21. This was because on my machine I already had 20 schemas loaded; by default, VS.NET creates 20 schemas, and we need to maintain the order to get the schema working. Hence the number.
3. Using the custom XSD
Once we close and reopen the IDE, this schema will always be available to it. To use it, open the HTML designer in the IDE and you will see a dropdown on the IDE toolbar for selecting the TARGET SCHEMA FOR VALIDATION.
I would like to thank Mikhail Arkhipov for the help he provided.
Thursday, May 3, 2007
Just recently, working on Reporting system, I was required to create PDF reports out of HTML pages. I did those successfully using iTextSharp, a free online library for rendering PDFs from HTML.
In the process, I had to customize the particular HTMLs to satisfy several constraints, and hence I had to validate them at the design level.
I found a great designer that did the trick: the well-known Visual Studio .NET designer. I decided to use it for my purpose, as it did most of the validations all by itself and I just had to add a few more of my custom ones.
Say, for example, in my report I was always required to have three DIVs in the HTML, namely "HEADER", "FOOTER" and "DETAIL".
Now, by default there is no such constraint on the user if he is writing regular HTML. But if it is a Report, then I had to put these validation checks in.
For this I had to tweak the Designer to throw messages to the user if required.
The best way out seemed to be MACROS, and that is what did the trick.
While creating any macro or module in a macro project, we always have one module in it, "EnvironmentEvents". This module contains handlers pertaining to all the events that occur in the Visual Studio IDE.
In the EnvironmentEvents module, choose the DocumentEvents.DocumentSaved event.
This event will be called every time a document is saved in Visual Studio IDE.
Here in you could have all your validations and all those will be thrown back to IDE using your own system of exception throwing, which could possibly be via MessageBox or some message in Output Window etc.
Now in order to get the items into Output window or task list, we can use the following piece of code.
Dim win As EnvDTE.Window = DTE.Windows.Item(Constants.vsWindowKindTaskList)
Dim tw As TaskList = CType(win.Object, TaskList)
tw.TaskItems.Add("ErrorCategory", Document.Name, "Error Message", _
    vsTaskPriority.vsTaskPriorityHigh, , True, Document.Name, -1, True, True)
Wednesday, April 18, 2007
Finally we got it to work. And in the meantime, I learnt quite a few new concepts in the process itself: Reflection, Properties, dynamically loading and calling assemblies, creating templates to use with the Visual Studio IDE, tweaking the IDE to implement my own IntelliSense, and many more.
Due to the lack of time, I could not update the place as frequently as before. So once I got some time out of work, I concentrated back on putting something up here.
So here is the first one from me:
The PROPERTIES in .NET are said to be faster than normal user defined functions.
But strangely enough (to me at least), the properties that are defined at the class level are, at the grass-roots level, implemented as functions themselves.
Just to confirm the same.....you could create a simple class with a few properties defined. Compile it and then try checking the output in the ILDASM.
I came across this fact when I was playing around with Reflection and Invoking methods and getting properties to return values by just specifying the names of the properties as string.
Usually any property in C# is written with two accessors (get & set), and the property is in turn implemented as two separate functions. For example, if we create a property called Prop1 with get and set defined, then at the root level they are implemented as the following two functions:
1. GET - get_Prop1()
2. SET - set_Prop1()
I accidentally came across this fact when the compiler told me that I cannot create functions named get_Prop1 or set_Prop1... as they already exist!
After a bit of exploration, here is the result: .NET properties are implemented as functions themselves. If so, then how can using properties be better than using functions?
Well, the .NET compilers are built and implemented in such a way as to give precedence to these get and set functions over all other functions, and hence they are faster. No special concerns about them.
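A quick sketch confirming the get_/set_ naming via Reflection (the type and property names here are mine):

```csharp
using System;
using System.Reflection;

class Sample
{
    private int _prop1;
    public int Prop1
    {
        get { return _prop1; }
        set { _prop1 = value; }
    }
}

class PropDemo
{
    static void Main()
    {
        // List only the methods declared on Sample itself: the compiler
        // has turned the property into a get_/set_ method pair.
        foreach (MethodInfo m in typeof(Sample).GetMethods(
                     BindingFlags.Public | BindingFlags.Instance | BindingFlags.DeclaredOnly))
        {
            Console.WriteLine(m.Name);  // prints get_Prop1 and set_Prop1
        }
    }
}
```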
Hope this helps someone in their work...
-- Ashutosh Vyas
Thursday, March 8, 2007
Just out of curiosity and a zeal to learn, I started out on making a small utility that would split a given PDF file into pages.
I did that successfully using iTextSharp, a free Open Source library for PDF generation.
Next step was to enhance this utility to read and merge different PDFs into one single unit based on any criteria given by the user.
After quite long I could finish that work and now the utility is ready with the following features:
1. Split PDF into single page PDF files.
2. Split based on criteria given by user.
3. Select multiple files to MERGE into one.
4. Merge files based on criteria by user.
Further to this, I had a look at another utility called PDFBox and I could read the elementary text out of the PDF as well.
So now we have a utility that has the power to Split / Merge or even read a PDF file into a text file/ text stream.
If anyone needs that please get in touch with me.
Friday, February 23, 2007
Currently I am done with the merging part, and now the app merges the selected PDF files to generate a single PDF with all the PDFs combined.
I am onto the second part: splitting. I foresee myself completing the basic part in a day or two, i.e. it will take a single PDF and split it into multiple files with one page each (for example, a PDF file with 10 pages will result in 10 single-page PDF files).
Now I am looking to make it more generic like you provide the resulting file name, location and each page in each file and the files are created on that basis.
Really Looking forward to it.
So wait and watch..................
Wednesday, February 14, 2007
This problem arises because most C programs use a library of routines called the standard time library (time.h). This library establishes a standard 4-byte format for the storage of time values, and also provides a number of functions for converting, displaying and calculating time values.
The standard 4-byte format assumes that the beginning of time is January 1, 1970, at 12:00:00 a.m. This value is 0. Any time/date value is expressed as the number of seconds following that zero value. So the value 919642718 is 919,642,718 seconds past 12:00:00 a.m. on January 1, 1970, which is Sunday, February 21, 1999, at 16:18:38 Pacific time (U.S.).

This is a convenient format because if you subtract any two values, what you get is the number of seconds between them. You can then use other functions in the library to determine how many minutes/hours/days/months/years have passed between the two times.

If you have read the Tech Touch on Bits and Bytes, you know that a signed 4-byte integer has a maximum value of 2,147,483,647, and this is where the Year 2038 problem comes from. The maximum value of time before it rolls over to a negative (and invalid) value is 2,147,483,647, which translates into January 19, 2038. On this date, any C programs that use the standard time library will start to have problems with date calculations.

This problem is somewhat easier to fix than the Y2K problem on mainframes, fortunately. Well-written programs can simply be recompiled with a new version of the library that uses, for example, 8-byte values for the storage format. This is possible because the library encapsulates the whole time activity with its own time types and functions (unlike most mainframe programs, which did not standardize their date formats or calculations).
So the Year 2038 problem should not be nearly as hard to fix as the Y2K problem was. An alert reader was kind enough to point out that IBM PC hardware suffers from the Year 2116 problem. For a PC , the beginning of time starts at January 1, 1980, and increments by seconds in an unsigned 32-bit integer in a manner similar to UNIX time. By 2116, the integer overflows. Windows NT uses a 64-bit integer to track time. However, it uses 100 nanoseconds as its increment and the beginning of time is January 1, 1601, so NT suffers from the Year 2184 problem.
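The 2038 rollover date is easy to check with a few lines (a sketch; the class name and output formatting are mine):

```csharp
using System;

class Y2038Demo
{
    static void Main()
    {
        // A signed 32-bit counter of seconds since the Unix epoch tops out
        // at int.MaxValue = 2,147,483,647 seconds after January 1, 1970.
        DateTime epoch = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);
        DateTime rollover = epoch.AddSeconds(int.MaxValue);
        Console.WriteLine(rollover.ToString("u"));  // 2038-01-19 03:14:07Z
    }
}
```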
Friday, February 9, 2007
Okay, another interesting issue. Has anyone ever wondered about the MaxLength property of a textbox in .NET?
Yes, by default the MaxLength is 0, and 0 in this case obviously means INFINITE, stating you can put any number of characters in the textbox.
Theoretically that's perfect, but practically speaking it never takes more than 32K characters.
Okay, to the extent I got to know about it, here is the reason:
STRING.LENGTH -------- integer / Int32
So basically it's the limitation of the String class and its Length property (which is of type Int32)
that limits the length of a textbox, and hence the limitation on the textbox MaxLength.
Monday, February 5, 2007
Yesterday while supporting user @ Experts-Exchange, I came across a question asking if I could create an instance of a new FORM at runtime and show it on the screen.
Well, thinking about it at first glance, it looks quite straightforward. Theoretically it's like creating a new instance of an object and using its methods and properties.
But practically it's not as easy as with any other object. Theoretically, the code below shows what exactly he wanted, and it's exactly what we can do in general coding with any other objects:
String strFormName = "Form1";
Form frmNew = (Form)(strFormName); // a new form with the name "Form1" (this does not compile; it only illustrates the intent)
So exploring around for it, at first it looked impossible, and that's what I thought.
But then I was guided to an article on MSDN that explained using Activator.CreateInstance for all such tasks, and it looked quite obvious.
Here's the code for it.
String strFormName = "Form1";
System.Reflection.Assembly asm = System.Reflection.Assembly.GetExecutingAssembly();
Form frmNew = (Form)asm.CreateInstance(asm.GetName().Name + "." + strFormName, true);
Hope someday this would be of use to me as well ;-)
Monday, January 29, 2007
Wednesday, January 10, 2007
I have been assigned the task to look at the prospects of implementing ECLIPSE IDE for Report development.
All these days, I have known ECLIPSE as an IDE for development of classes and writing code in JAVA. Just a user friendly interface for writing code and nothing much. But here while exploring, I realized its actually a lot more than that BUT SADLY FOR JAVA DEVELOPERS.
Apart from a mere few plugins for development with C/C++ and C#, there is not much support in ECLIPSE for other languages. I am still looking into the report designer for ECLIPSE.
The report designer has an Easy to use GUI for Rapid development of Reports and quick visualization of the same.
Best part is that the underlying code is all XML for this report and hence a universal structure. But the problem is the schema it uses for interpreting all HTML and SCRIPTING involved.
It literally dumps all the HTML and all expressions and evaluations that we add to the reports into this XML and then it basically kinda executes this XML while showing the report.
I am still looking at the prospects of using it in my Framework project, but at the moment there looks nothing much in it.