Recently I was testing my quicksort program on large inputs. It performed well on random data, but on sorted or reverse-sorted data the program crashed after a while with a segmentation fault. I wondered why this happened and checked whether the program was accessing any illegal memory location, but found nothing of the kind. Then I discovered that the default size of the system stack on my machine is 8 MB, which was not sufficient for the program to operate on that data. The shell's
ulimit command, which can raise the stack size limit, let me run my program successfully.
ulimit provides control over the resources available to the shell and to processes started by it, on systems that allow such control. The
-H and
-S options specify that the hard or soft limit is set for the given resource. A hard limit cannot be increased by a non-root user once it is set; a soft limit may be increased up to the value of the hard limit. If neither
-H nor
-S is specified, both the soft and hard limits are set. There are some options that I found useful:
-a All current limits are reported
-b The maximum socket buffer size
-d The maximum size of a process's data segment
-e The maximum scheduling priority ("nice")
-f The maximum size of files written by the shell and its children
-s The maximum stack size
-T The maximum number of threads
If a limit is given, it becomes the new value of the specified resource (the -a option is display only). If no option is given, -f is assumed. Values are in 1024-byte increments. For example, to assign 200 MB to your system stack you would type:
[subhendu@localhost ~]$ ulimit -Ss 204800
To see the current limits, type:
[subhendu@localhost ~]$ ulimit -a