
  1.  
    Many questions and answers contain images. Unlike in StackExchange 2.0, images in MO are included as links to URLs on various web servers scattered over the globe. (In SE 2.0, images are uploaded to the SE site itself.) If a server is temporarily down (which happened to my server after a weekend power failure), the images hosted there do not show up in MO. And when a user moves institutions, they may temporarily or even permanently lose access to the images they've posted, in which case those images are lost to MO.

    So I am wondering whether at some point MO should capture and store the images users post. Otherwise MO does not retain a complete record of what has been posted.
    • Mariano
    • May 2nd 2011

    +1

  2.  

    Duly noted, hehe.

    Of course the usual applies --- we have no control over the software we run, and, as discussed on another thread here, migrating to SE 2.0 looks unlikely for now.

    • Mariano
    • May 2nd 2011

    Maybe some enterprising soul could periodically scan the modump, search for links to images, download them somewhere stable, and (here the unlikely magic occurs...) edit the database to point the links at the new copies.

  3.  

    If you have posts.xml from the database dump, the following command

    grep -o "&lt;a href=&quot;[^&]*&quot" < posts.xml | sed -e "s/&lt;a href=&quot;\(.*\)&quot/\1/"
    

    will give you a list of all links. (Sorry, my bash scripting doesn't extend to awk, or whatever one is really meant to use here.) After that you'd want to choose things likely to be images, and download them. The miracle still comes later.
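
    For what it's worth, the "choose things likely to be images" step can be done with one more grep; a minimal sketch, assuming the usual extensions cover most cases (wget's -nc just skips files that have already been downloaded):

    grep -o "&lt;a href=&quot;[^&]*&quot" < posts.xml \
        | sed -e "s/&lt;a href=&quot;\(.*\)&quot/\1/" \
        | grep -iE '\.(png|gif|jpe?g)$' \
        | sort -u \
        | xargs -n 1 wget -nc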

    • Mariano
    • May 2nd 2011 (edited)

    A low-tech solution is:

    grep -o "&lt;img src=&quot;[^&]*&quot" posts.xml | sed -e "s/.*&quot;\(.*\.\(png\|gif\|jpg\)\)&quot/\1/" | xargs -1 wget
    

    LATER: In fact, quite a few of the img tags point to latex.mathoverflow.net, which one does not want, so

    grep -o "&lt;img src=&quot;[^&]*&quot" posts.xml 
        | sed -e '/latex.mathoverflow.net/d' -e 's/&lt;img src=*&quot;\(.*\)&quot/\1/' 
        | xargs -n 1 wget
    

    is a better alternative.
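
    In the same spirit, here is a rough sketch of everything short of the magic step: mirror each image under a stable name, and record a URL-to-file map that the eventual database edit could consume. (The mirror/ directory and url-map.txt are made-up names, and the extension handling is naive.)

    mkdir -p mirror
    grep -o "&lt;img src=&quot;[^&]*&quot" posts.xml \
        | sed -e '/latex.mathoverflow.net/d' -e 's/&lt;img src=&quot;\(.*\)&quot/\1/' \
        | sort -u \
        | while read url; do
              # name the local copy after a hash of the URL, keeping the extension
              name=$(printf '%s' "$url" | md5sum | cut -d' ' -f1).${url##*.}
              wget -q -O "mirror/$name" "$url" \
                  && printf '%s mirror/%s\n' "$url" "$name" >> url-map.txt
          done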

    By the way, with the last dump

    grep -o "&lt;img src=&quot;[^&]*&quot" posts.xml 
        | sed -e '/latex.mathoverflow.net/d' -e 's/&lt;img src=*&quot;\(.*\)&quot/\1/' 
        | xargs -n 1 HEAD -d -t 3
        | sort 
        | uniq -c
    

    (which uses a short timeout) returns

        505 200 OK
         40 204 No Content
         55 403 Forbidden
         19 404 Not Found
          1 404 NOT FOUND
          3 405 Method Not Allowed
          4 500 Can't connect to cs.smith.edu:80 (connect: timeout)
          1 500 Can't connect to img843.imageshack.us:80 (connect: timeout)
          1 500 Can't connect to math.huji.ac.il:80 (connect: timeout)
          3 500 Can't connect to maven.smith.edu:80 (connect: timeout)
          1 500 Can't connect to upload.wikimedia.org:80 (connect: timeout)
          5 500 Can't connect to www.freeimagehosting.net:80 (connect: timeout)
          2 500 Can't connect to www.math.hawaii.edu:80 (connect: timeout)
          1 500 Can't connect to www.maths.ed.ac.uk:80 (connect: Connection refused)
         22 500 read timeout
          5 501 Protocol scheme 'https' is not supported (Crypt::SSLeay or IO::Socket::SSL not installed)
    

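    To see which particular links are failing, rather than just the counts, one can keep each URL next to its status line; a small variation on the above, still assuming the HEAD that ships with lwp-request:

    grep -o "&lt;img src=&quot;[^&]*&quot" posts.xml \
        | sed -e '/latex.mathoverflow.net/d' -e 's/&lt;img src=&quot;\(.*\)&quot/\1/' \
        | sort -u \
        | while read url; do
              # prefix each status line with the URL it refers to
              echo "$url: $(HEAD -d -t 3 "$url")"
          done \
        | grep -v ': 200 OK$'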

  4.  
    @Mariano: Note that many, if not most, images are included via this syntax:
    ![alt text][1] .... [1]: URL. (I tried to embed a real example here but then it displayed the image!)
    • Mariano
    • May 3rd 2011

    Grepping the file for the string ![ gives me 9 occurrences, none of which are image links. Either I am not escaping the pattern correctly (I never remember what to escape when using which tool :/ ) or the links are stored in one format and presented to the user (when editing, say) in another.
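
    (For the escaping question: grep -F takes its pattern as a fixed string, so nothing needs escaping at all; a quick way to rule that out is

    grep -oF '![' posts.xml | wc -l    # -F: fixed-string match, no regex escaping needed

    and if the count stays at 9, escaping was never the problem, which fits the explanation in the next comment.)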

  5.  

    @Joseph,

    by looking at posts.xml we're actually looking at the final rendered HTML for each post. That is, the markdown syntax for including images that you mention has already been converted to standard HTML <img/> tags.
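
    Concretely, a post written with a (hypothetical) reference-style image such as

    ![alt text][1]
    [1]: http://example.com/figure.png

    ends up stored in posts.xml as escaped HTML, roughly

    &lt;img src=&quot;http://example.com/figure.png&quot; alt=&quot;alt text&quot; /&gt;

    which is why grepping for ![ finds nothing, while the &lt;img src= patterns above do.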

  6.  
    Thanks, Scott; I had never looked at posts.xml, and I shouldn't have remarked out of ignorance. Mariano, my apologies for the wild goose chase!
    • Mariano
    • May 3rd 2011

    Scott, so the dump is not really a dump but the result of htmlifying the markdown source?

  7.  

    @Mariano: posts.xml contains the htmlified versions of the posts, but posthistory.xml contains the markdown source.
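
    So anyone who wants the markdown-level image references could presumably run a similar pipeline against posthistory.xml; a rough sketch, assuming inline-style images (reference-style [1]: URL definitions would need a second pattern):

    # inline-style markdown images: ![alt](URL)
    grep -oE '!\[[^]]*\]\([^)]*\)' posthistory.xml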

  8.  

    This is all an artefact of how the various database tables are used in the underlying software (well, based on our limited understanding of that software!). The table that posts.xml comes from contains everything required to render the question pages, and in particular only needs the htmlified content. The actual source, which is needed much more rarely, is stored in a separate table, from which posthistory.xml is generated.

  9.  
    I hesitate to put this in the "Migrate to SE2.0" thread, because it is a very minor issue. But if we do migrate, would all the images in all the past MO posts be copied and stored on an SE server, or would they remain <img src=link>s to locations all over the web? The former would be preferable, I think...
    • WillieWong
    • Jul 18th 2011

    @Joseph: I'm sure this is one of those things that, if we ask for it, they can do. Provided the link in question is not already dead...