Linux - Newbie
This Linux forum is for members that are new to Linux.
Just starting out and have a question?
If it is not in the man pages or the how-to's this is the place!
Scenario:
I have multiple computers networked together, with one of them being the host (server). The server has a share set up that all the other computers create files on and write to (simple enough). For simplicity's sake, let's call the server "server" and the other computers comp1, comp2, comp3, etc.
The server checks the share every 15 minutes, querying the files (which are text files) and scrubbing them. The files from comp1, comp2, and comp3 stay open for anywhere between 1 and 1.5 hours (encryption and decryption). During the server's check I need to reliably determine whether each file is still open: if it is, leave it alone; if it isn't, we are clear to process it. Yes, the files remain open and are written to the whole time; they are never closed until the process is complete.
Question:
What would be the best way to approach the detection of an open file like this?
Side quest question: will 'lsof' be able to tell me if the file is still open, even if it is on the share and was created and written to by another computer?
Okay, you "share" these files.
Share how? What software are you using (Samba, NFS, something else?), and what versions at the host and the clients?
While we're at it, what operating systems are involved, and what versions?
What are the server settings that pertain? What are the client settings?
Note in my signature the link to "...how to ask a question" and have a read, it will help you going forward.
Right now we have very little to go on to determine what you are really doing, what you WANT to do, and why.
Is there a good reason for not determining the LONGEST period a client could have a file open (say, 12 hours) and just deleting the ones twice that old and older (24+ hours)?
Or, if possible, have the clients clean up after themselves automatically?
Or having the clients send a "flag" file indicating that they are done with that file and no longer using it?
SO MANY possibilities depending upon what you are really doing!
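For illustration, the "flag file" idea can be sketched in shell. Everything here is an assumption for demonstration purposes: the paths, the one-line stand-in for the real job, and the `.done` marker naming convention are made up, not the OP's actual setup.

```shell
#!/bin/sh
# Flag-file sketch: the writer creates NAME.done only AFTER it has
# finished and closed NAME; the reader skips files with no marker.
SHARE="${SHARE:-/tmp/share_demo}"   # stand-in for the real share path
mkdir -p "$SHARE"

# --- client side (comp1, comp2, comp3) ---
OUT="$SHARE/comp1_output.txt"
echo "payload" > "$OUT"             # stand-in for the hour-long encrypt/decrypt job
touch "$OUT.done"                   # marker: file is closed, safe to process

# --- server side (the 15-minute sweep) ---
for f in "$SHARE"/*.txt; do
    [ -e "$f.done" ] || continue    # no marker yet: still being written, skip
    echo "processing $f"            # the real scrubber would run here
    rm -f "$f" "$f.done"            # clean up once processed
done
```

The nice property is that the marker only appears after the writer's final close, so the sweep can never mistake a half-written file for a finished one.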
PS: I have had good results with lsof locally, but some kinds of processing can fool it. It is generally easier to fool on a share accessed from OUTSIDE the local processing queues, but with some sharing software that passes that information along well it sometimes works BETTER! I would test extensively before trusting it in production. Better to explicitly send notifications between the local and remote processing so you do not NEED to trust lsof.
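For what it's worth, a minimal local lsof check looks like the sketch below. The big caveat (and the reason to test on your own share) is in the comments: lsof only sees processes on the host it runs on, so an open held by an NFS/SMB client on another machine will generally not show up. The demo path is illustrative.

```shell
#!/bin/sh
# Ask lsof whether anything LOCAL has the file open.  lsof only sees
# processes on the host it runs on, so a file held open by an NFS/SMB
# client on ANOTHER machine generally will NOT appear here.
FILE="${1:-/tmp/lsof_demo.txt}"     # demo path; pass the real file as $1

STATE=closed
if lsof -- "$FILE" >/dev/null 2>&1; then
    STATE=open                      # some local process still has it open
fi
echo "$FILE is $STATE"
```

lsof exits 0 when it finds an opener and nonzero when it finds none, which is what the `if` relies on.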
There was a reason for being vague, which is why I tried to keep the question related to Linux. I can include certain information; for example, the server is really a TrueNAS SCALE system, which is Linux-based (this is technically outside the scope of this platform). All computers run Linux-based software. The longest period a client could have a file open does not apply, because the work is real-time: as soon as a file is done I need to work it (encryption and decryption), and the time varies. Due to the nature of the work being done, I am unable to provide in-depth information on the process. Setting a flag is not an option because the program is an independent application that I have no control over, which is why I have to check the file itself to determine whether it is still open.
With that additional information are you able to provide any additional options?
Assuming Linux-based systems on both ends using an NFS share: can the "server" determine if a file is open using lsof, and if not, do you have any alternatives that may work?
I am also aware that with the information that I've provided a definitive answer may not be possible.
Again thanks for the information and help.
Quote:
Originally Posted by madhatt30
With that additional information are you able to provide any additional options?
Yes: more information and more details will help us give more useful advice.
Quote:
Originally Posted by madhatt30
Assuming: Linux base to Linux base systems utilizing a NFS share can the "server" determine if a file is open utilizing lsof, if not do you have any alternatives that may work?
In general there is a file lock mechanism you can use.
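A hedged sketch of that lock mechanism, using flock(1) from util-linux: the sweep attempts a non-blocking exclusive lock and skips the file if the attempt fails. Note this is cooperative (advisory) locking, so it only helps if the writer actually takes the lock too (which, as noted above, the OP's independent application may not do), and lock behavior over NFS depends on the NFS version and lock-daemon configuration, so it must be tested on the actual share. The demo path is illustrative.

```shell
#!/bin/sh
# Cooperative lock check with flock(1): try a NON-BLOCKING exclusive
# lock; if it fails, some other process still holds the lock.
# Advisory only: the writer must also use flock for this to mean anything.
FILE="${1:-/tmp/lock_demo.txt}"     # demo path; pass the real file as $1
touch "$FILE"

STATE=busy
if flock -n -x "$FILE" -c true; then
    STATE=free                      # nobody holds the lock: safe to process
fi
echo "$FILE is $STATE"
```

`-n` makes the attempt fail immediately instead of blocking for up to the writer's full 1-1.5 hour hold time, which matters for a sweep that runs every 15 minutes.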
Quote:
The server has a share setup that all the other computers create files and write to
Quote:
During the server check on the files I need to reliable determine if the file is still open and if it is don't do anything with it and if it isn't then we are clear to process the file; yes the files do remain open and written to the whole time. The files are never closed until the process is completed.
Quote:
Assuming: Linux base to Linux base systems utilizing a NFS share can the "server" determine if a file is open utilizing lsof, if not do you have any alternatives that may work?