When defining parameters you can declare them either as default (no keyword), var, out, or const. The difference between all of these is the way the value is passed to the called function. The two underlying mechanisms are:

– Call-by-reference
– Call-by-value

The above-mentioned keywords (none, var, out, const) determine which mechanism is used:

// Call-by-value
procedure ParamDefault(AValue: Integer);

// Call-by-reference
procedure ParamVar(var AValue: Integer);
procedure ParamOut(out AValue: Integer);
procedure ParamConst(const AValue: Integer);

But why bother declaring parameters as call-by-reference?
With call-by-reference it is not the value that is passed, but the address of the value. This can be extremely useful when the value itself is many times bigger than a pointer (pointer sizes depend on the architecture). One caveat: strictly speaking, const only promises that the routine won't modify the parameter; for small types like Integer the compiler still passes the value itself, while large records and strings are indeed passed by reference.

So let’s look at an example using an Integer-parameter:

{ Call-by-value

Size of Integer is 32-Bit }
procedure CallByValue(AValue: Integer);

{ Call-by-reference

What is actually passed is a pointer:
 - 32-Bit system: 32-Bit
 - 64-Bit system: 64-Bit }
procedure CallByReference(var AValue: Integer);

In this example we can see that declaring the parameter as call-by-reference makes things even worse: on a 64-bit system the pointer is twice the size of the Integer it refers to. But take a type whose size is bigger and it quickly becomes clear why call-by-reference is sometimes the better choice.

// Size of TSize = 96-Bit
TSize = record
  FWidth: Integer; // 32-Bit
  FHeight: Integer; // 32-Bit
  FDepth: Integer; // 32-Bit
end;

// Size of Sizes = 288-Bit (96-Bit * 3)
var
  Sizes: Array[0..2] of TSize;

Types like records (depending on their fields) and arrays can occupy quite a large amount of memory, and for those it is more efficient to use call-by-reference.
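To illustrate, here is a sketch comparing both calling styles for the TSize record above (the procedure names are my own, not from any library):

```delphi
type
  TSize = record
    FWidth, FHeight, FDepth: Integer; // 96-Bit in total
  end;

// Call-by-value: all 96 bits are copied for every single call.
procedure ProcessByValue(ASize: TSize);
begin
  // works on a local copy of the record
end;

// const: the compiler passes records larger than a pointer by
// reference, so only a pointer-sized value crosses the call
// boundary, and the body may not modify ASize.
procedure ProcessByConst(const ASize: TSize);
begin
  // reads the caller's record directly
end;
```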

The same goes for String. A Delphi String is a reference-counted, copy-on-write type: passing it by value does not copy the characters, but it does increment the reference count on entry and decrement it on exit, on every single call. Declaring the parameter as const skips that reference-counting overhead entirely (an actual copy of the characters is only made if the string is modified).
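As a sketch (the procedure names are hypothetical), the difference for String parameters looks like this:

```delphi
// By value: the reference count of the string is incremented on
// entry and decremented on exit, with an implicit try-finally
// generated by the compiler around the routine body.
procedure LogByValue(AText: string);
begin
  // ...
end;

// By const: no reference counting at all; the routine reads the
// caller's string directly and may not reassign AText.
procedure LogByConst(const AText: string);
begin
  // ...
end;
```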

So a general recommendation is to use call-by-reference (var, out, const) for record, array, and String parameters whenever possible, to save memory and gain performance.

 

TL;DR

Use const/var/out parameters for Arrays, Records and Strings whenever it’s possible.

The try-finally block is awesome, especially for creating and freeing objects. But what do you do when you want to create multiple objects in a row? Create a nested try-finally for every object you create?

Object1 := TObject.Create;
try
  Object2 := TObject.Create;
  try
    ...
  finally
    FreeAndNil(Object2);
  end;
finally
  FreeAndNil(Object1);
end;

This is obviously quite an overhead when you have multiple objects. So let's just create every object at the beginning and free them all in the finally block, right? The fact that I have to ask should already give you the hint that this is wrong, or rather even worse.
For illustration, here is an example:

Object1 := TObject.Create;
Object2 := TObject.Create;
try
  ...
finally
  FreeAndNil(Object2);
  FreeAndNil(Object1);
end;

At first glance this looks like a good idea: only one try-finally for as many objects as we want to create. But a possible memory leak hides here.
Consider this: the first object is created successfully, but the constructor of the second one raises an exception. At this point we are still outside the try-finally statement, so the finally block never runs and the first, successfully created object is never freed.
Have this kind of bug a dozen times and you can say goodbye to your memory, or even to your application once the memory runs out.

So let's just move the creation of the objects inside the try block! Well, that alone would be even worse.
Local variables aren't initialized, so we have to assume the object references point to garbage. Now one of the constructors raises an exception and the finally block gets executed; that's what we want! But then we suddenly get an access violation, because we try to free a reference that was never assigned and still points to garbage…

And here comes the final missing piece for creating multiple objects with one try-finally: initialize every reference with nil. The .Free method checks whether the instance is nil, so calling it on a nil reference does nothing and raises no error.

Object1 := nil;
Object2 := nil;
try
  Object1 := TObject.Create;
  Object2 := TObject.Create;
  ...
finally
  FreeAndNil(Object2);
  FreeAndNil(Object1);
end;

With this way of writing one try-finally statement for multiple objects you get less overhead, because there are no more nested try-finally statements, and you keep a better overview of your objects.

Hint: Free the objects in reverse order of creation to minimize bugs caused by dependencies between the objects.

TL;DR

Use one try-finally for multiple objects by initializing every object with nil and creating them within the try-block.

As a Delphi developer, you don't get around objects. Creating and freeing them is an essential part of every Object Pascal based application. But now here comes a random guy telling you that you have been making a serious mistake in object management your whole life! Unbelievable, but hear me out, because you are going to thank me later.

As you probably already know, an object variable is essentially a pointer to the address where the memory for your instance is reserved. Creating an object returns that address, and freeing the object marks the reserved memory as no longer in use, but the variable still contains the address where the object used to live. And here comes the horror of object management in Delphi!
When you forget to nil the variable after freeing, you can no longer tell whether it is still valid, but that's not even the worst that can happen. Imagine a business application where it's critical that everything runs perfectly, and then out of nowhere one of your objects returns a number you can't explain. It's nothing you have ever seen, and no one in your company can help you. Even worse: you can't reproduce the bug, but your customer still hits it from time to time, corrupting their data. I leave it to your imagination what can happen when you have a 1 instead of a 0 in an application that decides between life and death.
But why can something like that happen just because I forgot to nil an object after freeing it?

The thing is that your program doesn't care whether the reserved memory of your object is still valid. As long as it can read what it wants, it is happy, even if that memory has since been reclaimed and overwritten by another instance.
This typically results in the well-known access violation reading/writing address 80070010 (or something similar), and when you see something like this you really should be alarmed! It is a bug where the application accesses the memory of an object that has already been freed, and it's a small miracle you found it at all, because whether the access violation actually fires is effectively random (not truly random, but depending on your application you may as well treat it as such).

Hint: An easy way to differentiate between a null-pointer access violation and an access violation because of freed memory is by looking at the address. When the address is a really small number (something like 00000004) then it’s a null-pointer access violation. Otherwise, it’s an access violation because of freed memory.
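A small illustration of the two failure modes (do not ship code like this; it exists only to show the symptom, and assumes System.Classes for TStringList):

```delphi
var
  List: TStringList;
begin
  List := TStringList.Create;
  List.Free;         // memory released, but List still holds the old address
  // List.Add('x');  // dangling reference: AV at some seemingly random address

  List := nil;
  // List.Add('x');  // nil reference: AV at a small address like 00000004
end;
```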

But how can I fix this memory madness?!
— Random stranger on the Internet

Forget the idea of using .Free on its own! Yes, it is the age of object-oriented programming, but nobody wants bugs, or to write two lines of code where one will do.
So instead of writing

Object.Free;
Object := nil; // <- considering you even wrote this line

Just write

FreeAndNil(Object);

 

The thing is that FreeAndNil does little more than set your reference to nil and then call .Free on the instance it used to point to (Free itself safely handles nil). Nilling first means the reference is already invalid even while the destructor runs.
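For reference, here is a simplified sketch modeled on the classic (non-generic) RTL implementation; note that it nils the reference before calling Free:

```delphi
procedure FreeAndNil(var Obj);
var
  Temp: TObject;
begin
  Temp := TObject(Obj);   // remember the instance
  Pointer(Obj) := nil;    // invalidate the caller's reference first
  Temp.Free;              // then destroy it (Free safely handles nil)
end;
```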

Conclusion

So here is my advice: use FreeAndNil whenever possible. It doesn't change the logic, but it helps you find bugs before you deliver them.

 

TL;DR

Use FreeAndNil(Object) instead of Object.Free to prevent data corruption and bugs that will break your mind!