Disk enumeration and file listing
Problem
I am writing C++ code to enumerate an entire HDD and produce a listing of all drives. However, it takes more than 15 minutes to complete the enumeration of all drives (a 500 GB HDD) and write the results to a binary file.
However, I have a third-party executable that produces a listing of the whole disk in less than two minutes. Can you please look at my code and suggest some performance improvement techniques?
void EnumFiles(CString FolderPath, CString SearchParameter, WIN32_FIND_DATAW *FileInfoData)
{
    CString SearchFile = FolderPath + SearchParameter;
    CString FileName;
    HANDLE hFile = FindFirstFileW(SearchFile, FileInfoData); // \\?\C:\*
    if (hFile == INVALID_HANDLE_VALUE)
    {
        // Error
        return;
    }
    do
    {
        FileName = FileInfoData->cFileName;
        if (FileInfoData->dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY)
        {
            if (!(FileName == L"." || FileName == L".."))
            {
                // Save the Folder Information
                EnumFiles(FolderPath + FileName + L"\\", SearchParameter, FileInfoData);
            }
        }
        else
        {
            // Save the File Parameters
        }
    } while (FindNextFileW(hFile, FileInfoData));
    FindClose(hFile);
}
Solution
Can you please look into my code and suggest me some performance improvement techniques.
No -- it doesn't work like that. While there are some obvious things you may catch visually (like observing string comparisons at each iteration) every time you optimize you should start by profiling your execution, then isolating the parts that take the most time, then optimizing, then profiling again.
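As one concrete illustration of the "string comparisons at each iteration" point: the loop builds a CString from cFileName and compares it against L"." and L".." on every entry. A direct character check on the raw cFileName buffer avoids that; a minimal sketch (IsDotEntry is a hypothetical helper name, not part of the original code):

```cpp
#include <cassert>

// Returns true for the "." and ".." pseudo-entries by inspecting the
// first few characters of cFileName directly, with no string object
// constructed per iteration.
static bool IsDotEntry(const wchar_t* name)
{
    return name[0] == L'.' &&
           (name[1] == L'\0' || (name[1] == L'.' && name[2] == L'\0'));
}
```

In the loop this would read `if (!IsDotEntry(FileInfoData->cFileName))` in place of the two CString comparisons. Whether this matters at all is exactly what profiling should tell you first.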
For example, it's possible that optimizing in this case is not removing or changing something you "do wrong" but splitting your hard drive into chunks and parallelizing the effort.
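One way to sketch that parallelization idea: enumerate only the top-level directories, then hand each subtree to its own task. EnumerateSubtree below is a placeholder for the real recursive FindFirstFileW/FindNextFileW walk (each task would also need its own WIN32_FIND_DATAW buffer, since the original code shares one across recursion):

```cpp
#include <future>
#include <string>
#include <vector>

// Placeholder for enumerating one subtree rooted at `root`; the real
// version would perform the recursive directory walk and return the
// number of entries found.
static size_t EnumerateSubtree(const std::wstring& root)
{
    return root.size(); // stand-in work so the sketch is runnable
}

// Launch one asynchronous task per top-level directory and combine
// the per-subtree results once all tasks finish.
static size_t EnumerateInParallel(const std::vector<std::wstring>& topDirs)
{
    std::vector<std::future<size_t>> tasks;
    tasks.reserve(topDirs.size());
    for (const auto& dir : topDirs)
        tasks.push_back(std::async(std::launch::async, EnumerateSubtree, dir));

    size_t total = 0;
    for (auto& t : tasks)
        total += t.get();
    return total;
}
```

Whether this helps depends on the disk: on a spinning HDD, parallel seeks can even make things worse, which is another reason to measure before and after.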
Either way, first, profile the execution and identify the worst offenders.
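Before reaching for a full profiler, a crude first pass is simply timing the candidate phases with std::chrono; a small helper sketch (TimeMs is an illustrative name, not from the original code):

```cpp
#include <cassert>
#include <chrono>

// Run a callable once and return how long it took, in milliseconds.
template <typename Fn>
double TimeMs(Fn&& fn)
{
    const auto start = std::chrono::steady_clock::now();
    fn();
    const auto end = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(end - start).count();
}
```

Wrapping the enumeration call, e.g. `double ms = TimeMs([&] { EnumFiles(root, search, &data); });`, gives a baseline number to compare against after each change.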
Context
StackExchange Code Review Q#40560, answer score: 6