I want to read the content of a flat text file. The data fields (corresponding to my entity fields) are separated by a specific character.
How to read it is not the problem: I use an API call to open the file, read the text file one line at a time, and when I reach EOF I close the file (again with the C++ API).
Text files with thousands of lines, for example 2500, need 10-15 minutes to store in the database. That is too long, I think...
Is there another way to achieve better performance?
Thanks in advance. Tobias H.
I assume you are working with Plex.
I suppose you are reading a text file and writing to a separate regular database. Assuming that, I have an initial question: what platform/architecture are you using? Is it a SQL database? Is it a System i (AKA AS/400) box?
Hi, right. I use Plex and I want to write the lines I read into a database. The structure of my entity (database table) corresponds to the content of the text file. The content is separated by a character.
Currently I use an MS SQL database. I could also write to iSeries (AS/400), but in my current tests I use SQL Server.
Client and server are both in the C++/SQL variant. The client runs on a local notebook and SQL Server on a networked virtual machine with a Windows Server OS.
Let me say that I only use Microsoft SQL Server occasionally, and indeed other colleagues here could help with it better than I can. In my experience a process of that type can be slow or very slow when writing to the database. I have two more questions for you:
1, what will be your production scenario? SQL Server on Windows Server, DB2 on System i, or a mix of them?
2, do you use regular patterns from Plex (block fetch, single fetch, process instance) or user-defined APIs?
I use the regular patterns with CheckedUpdate, InsertRow, etc. For reading the text file I use C++ API source code. My production scenario will be MS SQL on Windows Server, with a Windows 8 notebook that reads the text file. The network is a normal gigabit network.
From your explanation, you are indeed focused on SQL Server and Windows Server. In this scenario I'm not your best guide... However, a few things:
You mention the CheckedUpdate function. If you are doing your updates with that pattern, it will be slow "by nature". In that case, I would suggest using optimistic updates, with some sort of failure handling. I do not know the complexity of the tables and processes involved, but if really complex operations are performed, compare them against the same operations in pure SQL. If you see a better response with plain, optimized SQL, you should probably refactor your Plex processes, or change them to use SQL as source code. Not the best for the model, but sometimes the practical solution.
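One common reason 2500 rows take minutes is that each row costs a full round trip (and often its own implicit transaction). If you move to hand-written SQL, batching many rows into one INSERT statement usually helps a lot. A sketch of building such a statement from the parsed lines; the table name "MyEntity" and columns "FieldA"/"FieldB" are hypothetical placeholders, and real code must escape or parameterize the values:

```cpp
#include <sstream>
#include <string>
#include <vector>

// Build one multi-row INSERT statement for a batch of parsed lines.
// MyEntity, FieldA and FieldB are hypothetical names; substitute your own.
// NOTE: values are concatenated raw here for brevity -- production code
// should use parameterized statements to avoid SQL injection.
std::string buildBatchInsert(const std::vector<std::vector<std::string>>& rows) {
    std::ostringstream sql;
    sql << "INSERT INTO MyEntity (FieldA, FieldB) VALUES ";
    for (size_t r = 0; r < rows.size(); ++r) {
        if (r > 0) sql << ", ";
        sql << "('" << rows[r][0] << "', '" << rows[r][1] << "')";
    }
    return sql.str();
}
```

Execute the built statement once per batch instead of once per line; SQL Server limits a single VALUES constructor to 1000 rows, so flush every few hundred rows. Alternatively, keep per-row inserts but wrap the whole import in one explicit transaction, or look at SQL Server's bulk-load facilities (BULK INSERT / bcp) for large files.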