
Python Marc automation

Hello, I'd like to ask the following: I created a Python script for automatically entering values into tables in Marc Mentat from a CSV file. This CSV file contains 500 columns, and each column has exactly 50,000 rows of values. I only import the columns into Marc Mentat. Every time I run my Python script, the memory usage of MentatOGL.exe in the Task Manager starts increasing rapidly (about 1 MB/s during the input process). I start at around 2900 MB, and once it reaches approximately 4220 MB (which is around the 17th table), it stops adding values and the memory growth slows down from 1 MB/s to about 0.1 MB/s, effectively causing it to freeze. Can you advise me on why this is happening and how to fix it?

I am attaching my Python script together with the modified table: mentat_script.rar
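For context, a per-point import of the kind described above (this is only an illustration; the attached mentat_script.rar is not reproduced here, and `per_point_commands` is an illustrative name, not Mentat API) boils down to one Mentat command per value:

```python
# Illustration only, not the attached script: one *table_add command
# per value means 50,000 py_send calls for every one of the 500 columns.
def per_point_commands(column_values):
    """One *table_add command per (row index, value) pair."""
    return [f"*table_add {i} {v}" for i, v in enumerate(column_values)]

# Inside Mentat the commands would be sent one by one:
# from py_mentat import py_send
# for cmd in per_point_commands([20.0, 20.5, 21.0]):
#     py_send(cmd)
```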

  • Hello Michael Mojžiš,

    I hope you're doing well.

I wanted to address the memory usage problem you've encountered with your Marc Mentat script. The script was running into memory limitations because of the way data points were being sent to Mentat: each point was processed individually, which led to a very large number of API calls and steadily increasing memory consumption.

To resolve this, you can try updating the script to handle the data in batches: instead of sending each point one by one, the script collects a batch of points and sends them together. This reduces the overhead of repeated API calls and helps keep memory usage in check.
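As a rough sketch of the batching idea (not the exact script; `build_table_add_commands` is an illustrative helper of our own, and the `*table_add` syntax should be checked against your Mentat version's command reference):

```python
# Hedged sketch of batching: build one *table_add command string per
# batch of points instead of one command per point, so far fewer
# calls cross the Python/Mentat boundary.
def build_table_add_commands(points, batch_size=1000):
    """Group (x, y) pairs into *table_add commands, batch_size pairs each."""
    commands = []
    for start in range(0, len(points), batch_size):
        batch = points[start:start + batch_size]
        pairs = " ".join(f"{x} {y}" for x, y in batch)
        commands.append(f"*table_add {pairs}")
    return commands

# Inside Mentat (py_mentat ships with Marc Mentat):
# from py_mentat import py_send
# for cmd in build_table_add_commands(list(enumerate(values))):
#     py_send(cmd)
```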

    Let me know how it works.

    Regards,

    Amol Kumbhar

Hello Amol Kumbhar,

    I have tried the script with the batch function several times, but the errors I mentioned earlier have not disappeared—the batch function did not help.

    I read that when I send data manually (meaning that I copy them from Excel using Ctrl+C and paste them with Ctrl+V via Edit Table → Add, then press Enter), this method is optimized. When the data is being inserted, I can see that memory usage fluctuates, rising and falling, which indicates internal memory is being freed.

    However, when I run a script that inserts the data automatically, it is not optimized. The GUI immediately freezes, and I can only see from the command line that the tables are being completed (because in the script, I included a command to print which table is being modified and then another message confirming the table is complete once points are added).

As I mentioned before, the memory usage keeps increasing without any fluctuation, whether the batch function is used or not (I tried batch sizes ranging from 100 to 20,000). It always reaches around 4,200 MB, at which point the point insertion drastically slows down and effectively freezes.

    There is no mention of this issue in the manual, nor in the Python documentation for Marc Mentat, and I don’t know how to solve it. I am sending you the latest update with the batch function in the attachment.

  • Hello Michael Mojžiš,

    I understand you’re having some trouble with memory issues and the Mentat GUI freezing up. Your script's way of handling large data batches and frequently restarting connections might be causing these hiccups.

    In the updated approach, we start by reading the CSV file line by line and converting each value carefully. Skipping over the header and refining the data before using it is a good practice—it ensures you're only dealing with clean data. If there's any problematic data, we just print a warning and skip it, so we can keep things moving smoothly.

    For each set of data, we create a new table in Mentat, setting it up with "temperature" and "time" variables. By adding data points directly to the table, we manage memory usage much better. We also keep a single, stable connection to Mentat throughout, which helps avoid the issues caused by opening and closing connections repeatedly. As we go through each table, we make sure to fit the data properly.
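The steps above can be sketched as follows (this is not the withheld script; `read_clean_columns` is an illustrative name of our own, and the Mentat commands in the comments are from the procedure-command reference, so please verify them against your version):

```python
# Sketch of the described approach: read the CSV, skip the header,
# convert each value to float, and warn-and-skip anything unparsable
# so only clean data reaches Mentat.
import csv
import io

def read_clean_columns(csv_text):
    """Return one list of cleaned float values per CSV column."""
    reader = csv.reader(io.StringIO(csv_text))
    header = next(reader)                  # skip the header row
    columns = [[] for _ in header]
    for row in reader:
        for i, cell in enumerate(row):
            try:
                columns[i].append(float(cell))
            except ValueError:
                print(f"warning: skipping unparsable value {cell!r}")
    return columns

# Inside Mentat, each cleaned column then becomes one table, added
# over a single, long-lived connection:
# from py_mentat import py_send
# py_send("*new_md_table 1 1")            # new table, 1 independent var
# py_send("*table_name temp_column_1")
# py_send("*set_md_table_type 1 time")    # independent variable: time
# for t, temp in enumerate(column):
#     py_send(f"*table_add {t} {temp}")
# py_send("*table_fit")                   # fit the table to the data
```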

    I can’t share the exact script with you directly, but I hope this explanation helps you get things sorted.

    Regards,

    Amol Kumbhar

• Hello Amol Kumbhar. My conclusion is as follows: in the Marc Mentat 2010 version, my script works perfectly, flawlessly, and without any issues. In the 2020 version, however, I am experiencing the errors I mentioned in previous messages, and sending the data in batches does not help.

My conclusion is that the backend of the older Marc Mentat version was better suited for inserting points than the newer versions. In any case, thank you, Amol Kumbhar, for your help.