Showing posts from April, 2015

Hyper-V error when creating a new machine

If you're getting any of these errors:

"might not have permission to perform this task"
"server encountered an error while configuring hard disk you might not have permissions"
"Contact the administrator of the authorization policy for the computer "
"The server encountered an error trying to create the virtual hard disk"

or any error at all, always first check that this service is running:
"Hyper-V Image Management Service".

And, while you're at it, check all the other Hyper-V services too.
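One quick way to do that check, sketched here in C# with the System.ServiceProcess API (the class name is my own; you can of course just open services.msc instead). This requires a reference to System.ServiceProcess.dll and only runs on Windows:

```csharp
using System;
using System.ServiceProcess; // reference System.ServiceProcess.dll; Windows only

class HyperVServiceCheck
{
    static void Main()
    {
        // List every installed service whose display name starts with "Hyper-V"
        // (the Image Management Service among them) together with its status,
        // so you can spot any that are stopped.
        foreach (ServiceController svc in ServiceController.GetServices())
        {
            if (svc.DisplayName.StartsWith("Hyper-V", StringComparison.OrdinalIgnoreCase))
                Console.WriteLine("{0}: {1}", svc.DisplayName, svc.Status);
        }
    }
}
```

Any service not reporting Running is a candidate to start before retrying the machine creation.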

Variations not copying items or Ribbon Button "update all variants" is grayed out

My scenario: everything is working properly, and yet you never see your new items/pages in your variation targets.

This is, admittedly, an embarrassing post, but I'm writing it up because the key phrases I searched for returned no results.

The reason might be that you misread, or never read, the variations docs to the end; I assume it's in there, since I never really read them to the end either.

Anyway, if you set the target variation to be updated automatically, it will only pick up published items, so make sure you publish your content where needed.

But if you set it to be updated manually, the "Update All Variants" ribbon button is initially grayed out.

You must select items first, and only then click it.
It's a bit tricky, since:
  a. if you want to copy an entire list/library you need to select all the items, which can be a lot.
  b. selecting a sub-folder doesn't work.

The solution to both problems is to create a designated flat view (no sub-folders), so every item can be selected at the top level.

P.S. From a certain number of items…

Serialize (map) TermsSet / Term

I should clean this up sometime, but for now here is the code:

public class ComplexMappedTerm
{
    public static int LCID { get; set; }

    public Guid ID { get; set; }
    public string Label { get; set; }
    public int ChildrenCount { get; set; }
    public string Path { get; set; }
}
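Given the post's title, the DTO above is presumably filled from a SharePoint TermSet/Term tree. A minimal sketch of such a mapper, assuming the server-side Microsoft.SharePoint.Taxonomy API (Term.Id, Term.GetDefaultLabel, Term.TermsCount, Term.GetPath, and the Terms collections; the TermMapper class and Walk helper are my own names), might look like:

```csharp
using System.Collections.Generic;
using Microsoft.SharePoint.Taxonomy; // SharePoint server-side taxonomy API

public static class TermMapper
{
    // Map a single Term into the DTO above, using the static LCID
    // to pick the label language.
    public static ComplexMappedTerm Map(Term term)
    {
        return new ComplexMappedTerm
        {
            ID = term.Id,
            Label = term.GetDefaultLabel(ComplexMappedTerm.LCID),
            ChildrenCount = term.TermsCount,
            Path = term.GetPath()
        };
    }

    // Map a whole TermSet by walking its top-level terms recursively.
    public static List<ComplexMappedTerm> MapAll(TermSet termSet)
    {
        var result = new List<ComplexMappedTerm>();
        foreach (Term term in termSet.Terms)
            Walk(term, result);
        return result;
    }

    private static void Walk(Term term, List<ComplexMappedTerm> result)
    {
        result.Add(Map(term));
        foreach (Term child in term.Terms)
            Walk(child, result);
    }
}
```

The flat list of plain DTOs can then be handed to any JSON serializer, which is the point of mapping away from the Taxonomy objects in the first place.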

the Deep Web - the technical truth

If you try to google "deep web" you'll get tons of results; most are, unfortunately, conspiracy theories and "bad" stuff rather than the technical truth.


What's in it?

But the truth is that the first main portion of the deep web is just a lot of data that is out there, that anyone can get to, and that Google just can't index, for one of three main reasons:
  1. it's API-format data, i.e. JSON, XML, numbers etc.;
  2. it's form-query-based data, meaning it sits in a database and you need to send a query to get results;
  3. it's behind a login (Facebook, etc.).
Crawlers know how to handle web pages, today even pages with JS and AJAX, but still only web pages.
There are of course more reasons. For example, Google (and when I say Google I mean search engines in general) will not index illegal content, heavy gore, or copyright-infringing material (more relevant on YouTube), and there are commercial reasons too.
So yes, most of the deep web is just data, yet valuable data for specific audiences, such as NASA's APIs. Here is a list of 60 deep sites with APIs …