Hi,

I've built two new SL4.4 servers with /var as a 2 GB ext3 LV. After running through some server testing, I found I quickly ran out of inodes on /var: the filesystem was only 64% full, but 100% of the inodes were used. df -i shows:

    262144 256844 5300 98% /var

I've been using Linux for 14 years and this is the first time I've run out of inodes. These are currently test servers so it's no big deal, but if these servers were in production and this happened, what could be done?

The mke2fs man page shows:

    -i bytes-per-inode
        Specify the bytes/inode ratio. mke2fs creates an inode for every
        bytes-per-inode bytes of space on the disk. The larger the
        bytes-per-inode ratio, the fewer inodes will be created. This
        value generally shouldn't be smaller than the blocksize of the
        filesystem, since then too many inodes will be made. Be warned
        that it is not possible to expand the number of inodes on a
        filesystem after it is created, so be careful deciding the
        correct value for this parameter.

and:

    -N number-of-inodes
        Overrides the default calculation of the number of inodes that
        should be reserved for the filesystem (which is based on the
        number of blocks and the bytes-per-inode ratio). This allows the
        user to specify the number of desired inodes directly.

So it seems that the 262144 value is inadequate and I'll have to specify more when building the filesystem. I would have thought it would be an easy task to add inodes in a production environment using ext2online or similar, but that doesn't seem to be the case. It seems that in production the only option would be to back up the filesystem, reformat it with a higher -N value, and restore the data. Not a good solution at all, as production servers almost never get rebooted at my end (they're HA clustered).

Does anyone have any other options, or has anyone experienced this problem before? What I'm really after is knowing whether inodes can be added to a filesystem while it is online and mounted.

Thanks,
Michael.
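For what it's worth, before deciding on reformat numbers it helps to know *where* the inodes went, since every file, directory, and symlink costs one. A minimal sketch (it builds a throwaway tree under mktemp purely for demonstration; on a real box you'd point "top" at /var instead):

```shell
#!/bin/sh
# Sketch: find which directory trees are eating inodes when df -i says
# the filesystem is full. The demo tree below is an assumption made so
# the script is self-contained; substitute top=/var in real use.
top=$(mktemp -d)
mkdir -p "$top/cache" "$top/spool"
for i in 1 2 3 4 5; do : > "$top/cache/f$i"; done
: > "$top/spool/f1"

# Count entries under each top-level directory (one inode apiece)
# and sort so the heaviest consumer comes first.
report=$(for d in "$top"/*/; do
  printf '%7d %s\n' "$(find "$d" | wc -l)" "$d"
done | sort -rn)
echo "$report"

rm -r "$top"
```

Often the culprit is one runaway spool or cache directory, and pruning it buys time without touching the filesystem geometry at all.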
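The 262144 figure itself is just the filesystem size divided by the bytes-per-inode ratio, which here works out to 8192 (2 GiB / 8192 = 262144, matching the df -i total above). A quick sketch of the arithmetic, done in KiB to stay within shell integer range:

```shell
#!/bin/sh
# Back-of-envelope check of mke2fs's -i bytes-per-inode ratio for a
# 2 GiB /var: total size divided by the ratio is the inode count.
fs_kib=$((2 * 1024 * 1024))   # 2 GiB expressed in KiB
for ratio in 4096 8192 16384; do
  echo "-i $ratio gives $((fs_kib / (ratio / 1024))) inodes"
done
```

So halving the ratio to -i 4096 at mkfs time would have doubled the inode count to 524288; alternatively -N lets you name the count directly (the exact value you'd pass depends on how many files you expect, plus headroom).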