Tutorials: Step-by-step guides and how-tos with screengrabs.
April 4th, 2018, 03:34 PM | #1 | Junior Member
How to pull images from this site. Traditional methods don't work.
http://online.pubhtml5.com/vfof/guzu/#p=1
So I've tried all the methods I'm aware of for bulk downloading, but none of them have worked. I had to use Chrome's developer view to find the source files and then click and save them one at a time. A bulk image downloader extension didn't work either; it wasn't able to see any of the image files. Any ideas?
April 4th, 2018, 04:37 PM | #2 | halvar
In cases like this I use wget, a command line tool available for practically every operating system.
Using bash it can be invoked in a for loop:

Code:
for x in $(seq 1 148); do wget http://online.pubhtml5.com/vfof/guzu/files/large/$x.jpg; done

That's equivalent to running each download by hand:

Code:
wget http://online.pubhtml5.com/vfof/guzu/files/large/1.jpg
wget http://online.pubhtml5.com/vfof/guzu/files/large/2.jpg
wget http://online.pubhtml5.com/vfof/guzu/files/large/3.jpg
....
wget http://online.pubhtml5.com/vfof/guzu/files/large/148.jpg

Maybe there are also tools with a nice GUI out there to do this.

Addendum: I forgot the -i param. You can create a text file containing URLs and download them all with

Code:
wget -i myurls.txt
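For example, you could generate myurls.txt itself from the same pattern rather than typing it out (a sketch, assuming your seq supports the -f printf-style format option, which GNU and BSD seq both do):

Code:
# write the 148 image URLs into myurls.txt, then hand the whole list to wget
seq -f "http://online.pubhtml5.com/vfof/guzu/files/large/%g.jpg" 1 148 > myurls.txt
wget -i myurls.txt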
April 4th, 2018, 04:55 PM | #4 | halvar
On a Mac it's even easier, since you already have a bash terminal and curl!

Run Terminal and type

Code:
curl --version

If it is installed, then execute

Code:
for x in $(seq 1 148); do curl -o $x.jpg http://online.pubhtml5.com/vfof/guzu/files/large/$x.jpg; done
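As an aside, curl can also expand a numeric range by itself, so the shell loop isn't strictly necessary; the [1-148] range glob and the #1 placeholder in -o are standard curl URL-globbing features:

Code:
# curl substitutes each number from the [1-148] range for #1 in the output name
curl -o "#1.jpg" "http://online.pubhtml5.com/vfof/guzu/files/large/[1-148].jpg"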
April 4th, 2018, 06:31 PM | #5 | Junior Member
That worked great, Halvar!

Just so I can educate myself, would you mind translating that code a bit? It looks like you're telling it, "whenever you see 'x' after $, write a sequential number starting at 1 and ending at 148." And "do curl -o": is that one command or two different parts (i.e., "do curl" and "-o")? Then you use the URL but substitute the page number with "$x". I know some basic HTML/CSS, but that's about the extent of my coding knowledge. Would there be a way to do this if each image had a name and not a number?
April 4th, 2018, 07:06 PM | #6 | halvar
Sure. The loop header

Code:
for x in $(seq 1 148); do

tells bash to repeat the body once for each value produced by

Code:
seq 1 148

which simply prints the numbers 1 through 148. $x is a placeholder for the current value on each pass.

Code:
curl -o $x.jpg http://online.pubhtml5.com/vfof/guzu/files/large/$x.jpg;

Here curl is the command and "-o $x.jpg" is an option meaning save this as 1.jpg, 2.jpg and so on; the URL at the end is what to download, again with $x substituted in. A simpler example printing the numbers 3 to 5:

Code:
for foo in $(seq 3 5); do echo $foo; done;

The same with zero-padded numbers (-w pads them to equal width: 03, 04, ... 10):

Code:
for foo in $(seq -w 3 10); do echo $foo; done;

And yes, you can loop over names instead of numbers; the list after "in" can be any words*:

Code:
for v in foo bar "foo bar"; do echo ${v}; done;

* Sometimes the placeholder has to be in curly braces: ${v}

Just toy around a bit to get the hang of it. I am not a bash scripting expert myself, but I often find it rather useful. Here is a very good piece of documentation: http://tldp.org/LDP/Bash-Beginners-Guide/html/
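Putting that together for named files, a hypothetical sketch (the file names and URL here are made up; substitute the real ones you find in the developer tools):

Code:
# download a handful of images that are named rather than numbered
for name in cover contents back_page; do
  curl -o $name.jpg http://example.com/files/large/$name.jpg
done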
April 4th, 2018, 07:25 PM | #7 | Junior Member
That is super helpful, Halvar. I appreciate it.

Do you have any ideas on how to pull the image files from these two sites? I couldn't find the source location of the images.

http://magzus.com/read/penthouse_let...pril_2017_usa/

This one has an option for "reading online," which opens up a frame and lets you flip through the pages.

This other site I was able to find the source for, but the images don't appear to be in sequential order, and the file name structure seems to change. I'm not sure how to handle this one either.

http://openbook.hbgusa.com/openbook/9781455531356
April 5th, 2018, 05:09 AM | #8 | halvar
On the second site (openbook.hbgusa.com), only the first couple of pages are available without buying, and what is visible is not images but text. This doesn't surprise me; sites usually know how to protect their stuff.
May 30th, 2018, 03:03 AM | #9 | deepsepia (Moderator)
What you're going to want to do is turn on the "Developer Tools" option (I'm using Firefox) and then take a look at the "GET" requests the page makes . . . you can see the plain URLs to the images there. These tools in Firefox (there are similar ones in other browsers, and they all work similarly) are incredibly powerful; they let you watch as a particular webpage goes back to a server for graphics resources. Since the aim of the designers is to make this hard to do, there are a lot of wrinkles.

With the "Storage" tab selected, what I'm looking at are the "GET"s that this page is doing to store resources locally on my machine, and they include the URLs of all the actual JPGs of pages in the magazine, like this:

Code:
http://image.issuu.com/170228092429-a44baae32e0c0ec0323085902a9faef1/jpg/page_17.jpg

Notice that the pattern

hXXp://image.issuu.com/170228092429-a44baae32e0c0ec0323085902a9faef1/jpg/page_[somenumber].jpg

is repeated for all the pages, so you can use a curl loop as halvar illustrated above and grab all the pages by iterating [somenumber] from 1 to the highest page number. Copy and paste this into Terminal on a Mac (with curl):

Code:
for x in $(seq 1 148); do curl -o $x.jpg http://image.issuu.com/170228092429-a44baae32e0c0ec0323085902a9faef1/jpg/page_$x.jpg; done
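If you don't know the highest page number in advance, a small variation works (a sketch, assuming the server returns an HTTP error once you run past the last page; curl's -f flag makes it exit non-zero on such errors):

Code:
# keep fetching page_1.jpg, page_2.jpg, ... until the server stops serving them
x=1
while curl -f -o $x.jpg http://image.issuu.com/170228092429-a44baae32e0c0ec0323085902a9faef1/jpg/page_$x.jpg; do
  x=$((x+1))
done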
May 30th, 2018, 06:49 PM | #10 | deepsepia (Moderator)
So you get text that's coming in from a URL like:

http://openbook.hbgusa.com/openbook/...apter001.xhtml

. . . and you can use the same "fusking" trick that I used with the JPGs above: just plug it into halvar's curl code so that you iterate through the chapters, e.g.

..../chapter001.xhtml
..../chapter002.xhtml

. . . and so on (see the sketch below). You'll then have some work to do if you want to format these the way they were in the original . . . you need to render the downloaded resources with the stylesheet they were using on the site, which is, I think,

http://openbook.hbgusa.com/openbook/...stylesheet.css

. . . but I haven't checked that. In general these CSS pages have a lot of similar looking files, and it takes a bit of trial and error to identify which parts of the puzzle go where. But it's kinda fun. It is _not_ blackbelt hacking by any means, not really “hacking” at all: all you're doing is saving stuff that the site is pushing to your machine. But you can get a lot done just by poking around in the guts of a website. There are lots of sites that disable right click, for example; you can pretty much always find the resource they're hiding in the GETs. The same is true with a thumbnail gallery that something like Imagehost Grabber can't resolve: you open the page and start looking through the Developer Tools Inspector to see just what gets called.
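Here's a sketch of that chapter loop. The chapter URL above is truncated, so BASE below is a placeholder; substitute the real prefix you see in the developer tools. seq -f "%03g" produces the zero-padded 001, 002, ... that the file names appear to use:

Code:
# BASE is a placeholder for the real chapter URL prefix
BASE="http://example.com/openbook/9781455531356"
for n in $(seq -f "%03g" 1 40); do
  # -f stops on HTTP errors, -O keeps the remote file name; break at the first missing chapter
  curl -f -O "$BASE/chapter$n.xhtml" || break
done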