I am currently developing and testing a desktop application.
The application needs to talk to a server over a TCP socket; in my test setup, the server immediately echoes back whatever the client sends (there is only one client). The client side is written in Titanium.
I create a TCPSocket object with the createTCPSocket function and assign callbacks to the onRead, onWrite, onTimeout, and onReadComplete events. I then connect this object to the server with the connect function.
The connection is established properly (I can see it in the server logs). Server-side, the server (written in C) waits for data from the client using select().
If I issue a write on the socket immediately after connect, I receive the contents of the write back through the onRead event. The server logs show that select() is released and the data is sent back. But if I then issue a second write on the socket, the server no longer receives anything, so nothing comes back through onRead.
If I issue the write some time after connect, I do get an answer through onRead, but with a delay of at least 7 seconds (according to tcpdump, the server receives the data only after that 7-second delay). A second write then behaves as above: the server receives nothing, and onRead never fires.
I also noticed that the onTimeout event fires frequently. Often, on the first write, the onWrite event fires after onTimeout, and onRead fires after that.
My questions are:
- Why is there latency on a write that is not issued immediately after connect?
- Why doesn't a second write to the socket work? Is it necessary to close the socket and re-establish the connection?
- Should we change the server so that it uses two sockets, as in http://pastie.org/690072?
- What is the purpose of the timeout on the TCP socket, and is there a way to make it unlimited?
Can anyone help?