I've built this little data viewer:
http://mediabeez.ws/htmlMicroscope/
http://code.google.com/p/htmlmicroscope/downloads/list

It lets you view big programming-language-level arrays in the browser efficiently. But as it turns out, browsers misbehave badly on XXL datasets.

The problem in a nutshell: the easiest way I can solve the problem of shipping 40+ MB software arrays is to have the server encode them into something like this:

<div style="display:none;"><!-- [40 MB+ data] --></div>

I then want to scan through that data in JavaScript using .innerHTML.substr() on the div. Back in programming class, they explained how any .substr() should be able to fetch small pieces of very large strings.

So here's your "challenge". I want to be able to put up to 1 gigabyte of data in my commented-out hidden div and scan through it 1 kB to 10 kB at a time, as fast as possible. Once I'm done building up the DOM for it (semi-recursively, with plenty of setTimeout()s), I want to free the memory of the original data (so delete the hidden div). Let's face it, lots of ordinary computers (especially those of developers) have 2 to 4 GB of memory, leaving at least 1 GB free for the browser.

My current tests, with 40 MB of hidden data, on IE8 / WinXP with 2 GB of memory:

a) document.getElementById('validID').innerHTML = 'shortString'; no longer works on such a page.

b) The "developer tools" are a long-overdue improvement, but they tend to freeze up my system on a page like that, even when I'm just looking at the console and doing nothing else.

a+b) I don't have a way to even check whether your .innerHTML.substr() works, let alone at what speed. By the way: FF/Ubuntu takes about 700 ms per call, FF/Windows about 150 ms (still way too slow when asking for just 100 bytes per call, eh). And Chrome has no console.log/trace, and like IE (and all FFs) it refuses to update another div's .innerHTML with even a short string.

c) By the way, did you think to implement console.trace()? It should return a full JS function trace, with full parameter content per function call.
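To show what I mean by the chunked scan, here's a minimal sketch (function names are illustrative, not from my actual code). The data string is assumed to have been read from the hidden div's innerHTML once and cached in a JS variable; caching matters, because every fresh .innerHTML read can force the browser to reserialize the whole node.

```javascript
// Create a scanner that walks a large cached string in fixed-size chunks.
// On a plain JS string, substr(offset, size) should cost O(size), not O(total).
function makeChunkScanner(data, chunkSize) {
  var offset = 0;
  return function next() {
    if (offset >= data.length) return null;      // no data left
    var chunk = data.substr(offset, chunkSize);  // small slice of the big string
    offset += chunk.length;
    return chunk;
  };
}

// Scan the whole string 1 kB at a time, yielding to the event loop between
// chunks (the "plenty of setTimeout()s" approach) so the UI stays responsive.
function scanAll(data, chunkSize, onChunk, onDone) {
  var next = makeChunkScanner(data, chunkSize);
  (function step() {
    var chunk = next();
    if (chunk === null) { onDone(); return; }
    onChunk(chunk);         // e.g. parse this piece and append DOM nodes
    setTimeout(step, 0);    // let the browser breathe before the next chunk
  })();
}
```

In the browser you'd call scanAll(cachedInnerHTML, 1024, parseChunk, removeHiddenDiv), where removeHiddenDiv deletes the hidden div so its memory can be reclaimed.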
Hope you can do something about these boundaries.

----------------
This post is a suggestion for Microsoft, and Microsoft responds to the suggestions with the most votes. To vote for this suggestion, click the "I Agree" button in the message pane. If you do not see the button, follow this link to open the suggestion in the Microsoft Web-based Newsreader and then click "I Agree" in the message pane.

http://www.microsoft.com/communitie...&dg=microsoft.public.internetexplorer.general
Developer-specific resources include:

MSDN IE Development Forums <= post such questions here instead
http://social.msdn.microsoft.com/forums/en-US/category/iedevelopment/

IE Developer Center
http://msdn.microsoft.com/en-us/ie/default.aspx

Learn IE8
http://msdn.microsoft.com/en-us/ie/aa740473.aspx

HTML and DHTML Overviews and Tutorials
http://msdn.microsoft.com/en-us/library/ms537623.aspx

Cascading Style Sheets (CSS)
http://msdn2.microsoft.com/en-us/ie/aa740476.aspx

Expression Web SuperPreview for Internet Explorer (free, stand-alone visual debugging tool for IE6, IE7, and IE8)
http://www.microsoft.com/downloads/...FamilyID=8e6ac106-525d-45d0-84db-dccff3fae677

Expression Web SuperPreview Release Notes
http://www.microsoft.com/expression/products/Web_SuperPreviewReleaseNotes.aspx

Validators:
http://validator.w3.org/
http://jigsaw.w3.org/css-validator/

rene7705 wrote:
> I've built this little dataviewer
> http://mediabeez.ws/htmlMicroscope/
> [snip]