Monday, September 18, 2017

Angular 4 Material - beginner tutorial / starter notes

I have just tried Angular 4 Material, following their getting started guide, starting with the DatePicker.

but things didn't go as well as expected, until I jumped to the plnkr to see the example.

if you look at the main.ts, it has a long long list of imports.

I just added all of that long list to my app, and it finally worked.

technically, if you import just the MdDatepickerModule as stated in the docs, something will happen, but even the <md-form-field> didn't work, and eventually you want it to look just like in the docs.

so my 1st advice - import all the modules. when you get to the point where you want to change or use specific things, start cutting modules out.

2nd advice is to make a separate module with all the material modules. but how to do it right?
well, 1st, don't put the new file in the app/src folder or you will get this error.

2nd, you just copy that long long list and paste it 3 times. your file should look like this:

import { NgModule } from '@angular/core';

import {
  // long long list
  // goes on and on
} from '@angular/material';

@NgModule({
  imports: [
    // same long long list
  ],
  exports: [
    // same long long list
  ]
})
export class MaterialArmadaModule { }

and then in your app.module.ts. I'll also state here the LOCALE_ID that you should always use to solve all the locale stuff:

import { MaterialArmadaModule } from './Modules/material.module';
import { LOCALE_ID } from '@angular/core';

@NgModule({
  declarations: [ /* your components */ ],
  imports: [
    // MaterialArmadaModule MUST come after the browser and animation modules
    MaterialArmadaModule
  ],
  providers: [
    { provide: LOCALE_ID, useValue: 'he-IL' } // your locale
  ]
})
export class AppModule { }


Sunday, September 17, 2017

Taking Angular 4 as Stand Alone with Asp.Net Web Forms or MVC

once you run ng build in the CLI you will have a new folder named dist containing the rendered solution, in our case our nice Products App.

I created a new ng project named MSProductsAppAspNet in order to put it inside an Asp.Net project.

IISHandler (ashx)

so to make things easy, I started with a handler that does the same thing we did with the web api, merging the Products[] array init lines into a static var inside the Product class.

public class Product {
  public int Id { get; set; }
  public string Name { get; set; }
  public string Category { get; set; }
  public decimal Price { get; set; }
  public static Product[] products = new Product[] {
    new Product { Id = 1, Name = "Tomato Soup", Category = "Groceries", Price = 1 },
    new Product { Id = 2, Name = "Yo-yo", Category = "Toys", Price = 3.75M },
    new Product { Id = 3, Name = "Hammer", Category = "Hardware", Price = 16.99M }
  };
}
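on the Angular side, the same shape can be written as a TypeScript interface. this is my addition, the tutorial itself keeps the data untyped, and note that JavaScriptSerializer keeps the C# PascalCase property names in the JSON:

```typescript
// mirrors the JSON the .ashx handler emits (PascalCase, like the C# class)
interface Product {
  Id: number;
  Name: string;
  Category: string;
  Price: number;
}

// sample item with the same shape as the C# seed data
const soup: Product = { Id: 1, Name: "Tomato Soup", Category: "Groceries", Price: 1 };
```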

and the handler looks like this

public void ProcessRequest(HttpContext context) {
  // requires using System.Linq; for FirstOrDefault
  System.Web.Script.Serialization.JavaScriptSerializer JsonSerializer =
    new System.Web.Script.Serialization.JavaScriptSerializer();

  context.Response.ContentType = "application/json";

  string a = context.Request["a"];

  switch (a) {
    case "GetAllProducts":
      context.Response.Write(JsonSerializer.Serialize(Product.products));
      break;
    case "FindProductById":
      string strId = context.Request["id"];
      int id = int.Parse(strId);
      context.Response.Write(JsonSerializer.Serialize(
        Product.products.FirstOrDefault((p) => p.Id == id)));
      break;
  }
}

and in our ng service, ProductRestApiService, I just changed these 3 lines

private url = '/ProductsHandler.ashx?a='; 
this.http.get(this.url + "GetAllProducts")
this.http.get(this.url + "FindProductById&id=" + id)

each in its respective place
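those 3 lines boil down to building two urls off the handler. pulled out as a standalone helper (the function name is mine, not the tutorial's), the url building looks like this:

```typescript
// base url of the .ashx handler; "a" is the action name the switch reads
const baseUrl = '/ProductsHandler.ashx?a=';

// builds the handler url for an action, with an optional id parameter
function handlerUrl(action: string, id?: number): string {
  return id === undefined ? baseUrl + action : `${baseUrl}${action}&id=${id}`;
}
```

so this.http.get(handlerUrl('FindProductById', id)) produces the same request as the hand-built string above.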

I then ran ng build, copied the dist folder into a folder in the project I called ng4, and the only thing I needed to change in the Index.html was the <script src> references, adding /ng4/ before each, and the results were perfect.

inside aspx page

moving <app-root></app-root> and the <script> tags to the .aspx page, inside the form tag with the runat="server" attribute, tried to postback when I clicked search. so I changed find() in FindProductByIdComponent to return false. now everything works great.
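the return-false trick can be sketched like this (a sketch of mine, not the tutorial's exact component; the service call is stubbed out). Angular calls preventDefault when an event handler returns false, which is what stops the WebForms submit:

```typescript
// inside <form runat="server">, a (click) handler that returns false stops
// the WebForms postback, so search stays a client-side Angular action
class FindProductByIdComponent {
  find(id: string): boolean {
    // ...call ProductRestApiService with id here...
    return false; // cancels the default submit of the hosting .aspx form
  }
}
```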

ok, so now let's build 2 different components in order to use them in, say, 2 parts of the same page, or in 2 different pages.

of course we could just make 2 Angular 4 projects and have 10 js files, but we don't want that.

well, let's start by stating that if you try to use any component outside <app-root></app-root> you get an error.

inside an Asp.Net website

so I am now throwing out ideas....

1st is to just create a long list of our components in 1 project and decide which to call via an attribute on <app-root></app-root>, like <app-root component="clock"></app-root>

then inside app.component.ts import and use ElementRef like this

import { Component, ElementRef } from '@angular/core';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html'
})
export class AppComponent {
  component: string;
  constructor(elementRef: ElementRef) {
    this.component = elementRef.nativeElement.getAttribute('component');
  }
}

and in the app.component.html just put all of the components, each with an *ngIf like
*ngIf="component == 'clock'"
(note the value must match the attribute's case exactly)

and it will work as long as each <app-root component="clock"></app-root> call with a different component is on a different page, since angular just looks for its 1st root element.
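pulled out of Angular, the selection logic is tiny. a hedged sketch (the names and the fallback are mine) of reading the attribute the *ngIf's compare against:

```typescript
// stand-in for the native root element: anything with getAttribute()
interface HasAttributes { getAttribute(name: string): string | null; }

// which component name should this <app-root> render?
function pickComponent(root: HasAttributes, fallback = 'clock'): string {
  return root.getAttribute('component') || fallback;
}
```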

but if you want several components on the same page... well, that's another post to come,
but you can start looking at some resources


MVC works either with a Handler, or an ApiController, or even standard Controllers, or by just throwing your data on the page and launching angular to take it as is. anyway, nothing we haven't learned.

for the entire series
"Angular 4 with .NET (webForms/MVC) Tutorial - Table Of Content"

Working with Angular 4 and .Net the right way - WebApi

As already stated, Angular is a Framework for Single Page Applications, so it's NOT for Websites that just want a good enhancement for their pages. but.... we love Angular, so we do want to learn how to do that. let's start at the beginning.

1st, let's just create a simple SPA app with WebApi (v2), and see how the thing works.

as stated, this tutorial is not for learning how to code, so I'm using the Basic MSDocs example for WebApi as is, just copy, paste, and make sure it's working, and DO try to understand it. in the end it's just a RESTful API.

*MAKE SURE you allow CORS for the WebApi since we're doing all this at localhost, so add it to the web.config (msdn)

so we now have these 2 REST urls (you might have a different localhost port, like localhost:49382):
http://localhost:57267/api/products => all
http://localhost:57267/api/products/1 => [id]

great, now we will create an SPA app with 2 tabs, 1 for all and 1 for search.

if you never coded in Angular 4, see this nice coursetro tutorial

start by creating a new app, like we learned here, or from the angular cli home site.
I'll call mine MSProductApp, so the CLI command is
ng new MSProductApp

now I'll open it with Visual Studio Code [open folder], just since it has better color differences for JS; for .Net apps I use VS 2017.

i'll start by adding 2 components and 1 service
ng generate component AllProducts
ng generate component FindProductById
ng generate service ProductsRestApi

my results

notice that the generator changed the upper-case names to dash-lower-case. this is because HTML element names may be lower-case only, so these fixes are made automatically by Angular, and I did it on purpose for you to learn. also notice the selectors:

anyway, let's start by making a semi-tabs functionality. this is for tut-purposes only, you should use things like the Angular Material tabs component or something for real apps.

*NOTICE, I am using pictures since I DON'T teach how to code here, use and read the tutorials I mention, but if you're in it already, here is a git in the end with all my code.
Bresleveloper's Angular4WebApiTutorial git

so coding the app-root and the styles.css with a simple *ngIf to set tabs. sorry for being so rough, I just want to get to the point quickly

for understanding how to code services, see the services chapter in coursetro, and I definitely recommend understanding angular 4 http, which is what I did; promises are the next one, and of course the angular docs.

p.s., the right way is with types and modules etc., but I just want to quickly get to the point.

the right way in angular when using a service to load static data is to let the service load everything and let the component use the data straight from the service, so planning the AllProducts component to just use our service will look like this (html, css, ts [TypeScript, instead of js]). the files here are

*Notice the double dot ("'../product...") for our service, as it is 1 directory up, unlike with app.module.ts, which has only 1 dot since it's in the same directory.
now, in order for the app to know our service, we must import it in our app.module.ts

so now for the hard part, configuring a service to consume REST.

why do I call it the hard part? because it takes a few steps just to get the Http provider running.
1st is to import HttpModule AND Http, Response... in the app.module.ts file, and also add the HttpModule in the NgModule imports section

then you again need to import Http and friends in your component's .ts, plus rxjs for using promises

anyway, this is the result, a simple call to our rest api as the service starts working,

and if you build all this with me, you should see

now to our search page, find-product-by-id.component.[html, css, ts], just don't forget to import the service

and the function in the service
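the function in the picture is just an http.get with the id appended; its C# twin in the handler is FirstOrDefault, and the TypeScript twin of that (if you ever filter locally instead of calling the API) is Array.find. a small sketch of mine:

```typescript
interface Product { Id: number; Name: string; }

// TypeScript analogue of the handler's products.FirstOrDefault(p => p.Id == id)
function findProductById(products: Product[], id: number): Product | undefined {
  return products.find(p => p.Id === id);
}
```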

and the result

there you go!
we've created an Angular 4 SPA that consumes a simple .Net WebApi.

again the git Bresleveloper's Angular4WebApiTutorial git

next we'll see how to take this app out into a simple .Net WebForms project or MVC.
 Taking Angular 4 as Stand Alone with Asp.Net Web Forms and MVC

for the entire series
"Angular 4 with .NET (webForms/MVC) Tutorial - Table Of Content"

Friday, September 15, 2017

Angular 4 - Understanding Angular 4 Basics and Files

so let's start fast by learning to create an Angular 4 project, through the eyes of a .Net developer.

1st pre-requisite is Node JS, download it from here.

Windows CLI

press Windows+R, it will open the "Run" box; write "cmd" and press Enter.

the windows CLI, "Command Line Interface", will open.

go to the folder where you want to put all the project's files, for example for me it's under documents. the commands will be: [note that pressing "tab" will autocomplete, like "cd d" completing to "cd documents"]
"cd documents" - the CLI starts at your user folder, so this is to go into a child folder
"md "Angular 4"" - you can make folders via the CLI
"cd "Angular 4"" - go into that folder

Angular CLI

so now you will download and create an entire angular 4 project with Node, build it, and test it in your browser - what used to be VS -> new project -> web X -> F5.

all this you can see in the Angular CLI site, or the Angular site Docs

i also recommend this nice tutorial from coursetro

P.S. - some of these commands can take several minutes.

"node -v" - just make sure you have node installed
"npm install -g @angular/cli" - will install the angular CLI on your computer
"ng new my-dream-app" - will create a folder "my-dream-app" with a project with angular 4, node, testing, build tools, git, rendering, everything you need.
"cd my-dream-app" - go into that folder
"ng serve" - will start a node server to test your ng code, just browse to "http://localhost:4200"
in the cmd press Ctrl+C (twice, with some time between) to stop the service.
"ng build" - builds the code into a few final HTML and JS files

Quick Overview of the results

this tutorial is not to teach you how to code, but how to work with Angular 4 and understand what's going on, as most people can code but don't know what's what.

in the "my-dream-app" folder most of the files are configuration files, for TypeScript, for building the app, for testing (protractor and jasmine), and for node.

the folders:
".git" - hidden folder, preparing your repo, you can eventually deploy it all for free to github and angular git pages, learn how in this coursetro
"dist" - final HTML and JS files
"e2e" - testing stuff
"node_modules" - instead of IIS Express that acts as a server for your site, now node does it.
"src" - the files YOU write.

i want to focus on src and dist, as the rest are just there to give you a nice eco-system to build, deploy, and dev fast, as explained in the prev post.

1st there are all the files under "src"; most are just TypeScript configs, and there are the simple "index.html" and "styles.css", which are the basics and starting points for every web project. "styles.css" is empty, and "index.html" has just the "<app-root></app-root>" tag for the root Directive / Component. <-- this is just a teaser, go learn the differences.

"environment" for differences for prod and dev builds.
"assets" as is, images and stuff, you might need to remind git to add them ("git add src/assets").

"app" for YOUR CODE. you will do commands like "ng generate component my-dream-component" to create new components or services and it will put all the new files here.

"dist" will appear after a build; there will be the final HTML and JS files.

lets check them out.
"index.html" - same but with JS sources of the following

JS files, each with its map file

"inline.bundle.js" - has the webpack loader

"main.bundle.js" - has YOUR CODE in it, WebPacked

"polyfills.bundle.js" - let me quote whats inside "This file includes polyfills needed by Angular and is loaded before the app"

"styles.bundle.js" - has the content of the .css files. NOT the X.component.css ones, those go to main.bundle.js

"vendor.bundle.js" - this replaces the "angular.js" file

all those files are the sums of many other files; the bundling thing you can get from this image from the coryrylan blog, it takes all the dependencies and builds them into 1 big file, which is called a bundle and is done with webpack

in the next post we will make a most basic component and see how to use it with .Net, starting with WebApi. Working with Angular 4 and .Net the right way - WebApi

for the entire series
"Angular 4 with .NET (webForms/MVC) Tutorial - Table Of Content"

Angular 4 with .NET (webForms/MVC) Tutorial - Table Of Content

After quite some time, the time for me to start playing with Angular 4 has come, and as a .NET developer used to integrating AngularJS (angular 1) with my WebForms / MVC / SharePoint solutions, suddenly there is a CLI, new unknown files, and questions jump up:

- Where is the AngularJS file?
- Where is MY code?
- How do I integrate Angular 4 with my .Net solution, MVC or WebForms?
- Why do I need all this CLI and NodeJS?

well, I would like to answer all of that, after answering it to myself, and share it with you.

so I'll just build some posts

1. Angular 4 - History, Why(s), Angular 4 Basic Design.
2. Angular 4 - Understanding Angular 4 Basics and Files.
3. Working with Angular 4 and .Net the right way - WebApi
4. Taking Angular 4 as Stand Alone with Asp.Net Web Forms and MVC

Hope you Enjoy!

Angular 4 - History, Why(s), Angular 4 Basic Design

Brief History

Stone Age

once upon a time a nice company developed a browser named Netscape. all you youngsters that don't know what that is, go read the Wiki. anyway, they changed the web by inventing JavaScript.

with time, all the new browsers adopted the new scripting language, and each implemented it a bit differently. you needed to be a real expert to do something better than an alert() or confirm(); the best you had were simple forms. I think we can date this up to the late 90's.

jQuery Age

then came some standards for JavaScript and CSS, and Java and Microsoft and others developed Web Platforms which could do cool things on the server, but the client side was the land of jQuery.

jQuery was (and still is) an outstanding project for Cross Browser Development on the Client Side, making it simple to do the same thing in all browsers, including DOM manipulation, Creation, CSS, and even Animations. being a web dev meant being a jQuery man, and the term Plugin actually meant jQuery plugin.

there were some other growing Frameworks like KnockoutJS and BackboneJS; you can see most of them in this awesome TODO tutorial with EACH framework, called TodoMVC, though they sometimes remove old or non-popular ones.

Angular JS Age

then, a bit before 2010, browsers became strong, and more elegant, complex, heavy-duty stuff could be done, so people moved from Plugins to Frameworks. Framework MEANS you do EVERYTHING in and by and with the framework, according to its way. and that's a very important note.

so many frameworks came, and the real war in the beginning was between Angular.js and Ember.js. then came React.js. but why? (read some at sitepoint for example)

well, just like you want to move from Assembly to C to C++ to C#/Java, it's kinda the same. people, backed by Corps, wanted more power in the JS field, so they created great things, each with a bit different philosophy: some want to help you push in some components with more power, like React, and some want to take over with a lot of steroids, like Angular.

so Angular survived all the way, made some changes, and got the idea of its real power - take over the entire Client / SPA dev eco-system its own way, and make it the best.

it's important to understand that, since many people complain about the Learning Curve, and how hard it is to make simple things with Angular, and they are right. if you want simple help with your website you should use jQuery. for a complex component, but still just 1 or a few inside your website, use React. for a complex SPA like GMAIL, or heavy TDD, use Angular.

I must mention TypeScript here (the link is to the Docs intro), which changed everything too by making JS an OOP language, or at least more Typed, as much of what follows is based on that.

Age of JavaScript 6, HTML 5, CSS 3, and Components.

so with all that going on, and everybody struggling with the same burdens on every front, web or SPA, the W3C, which is in charge of how our web is built, like the APIs the browsers should offer, decided to change everything.

HTML 5 now gives you elements you used to need plugin and flash.
CSS 3 now offers Animations and Responsive tools.
JavaScript, ECMAScript 6, now offers most of what you used to need jQuery for.

and some new concepts are around now, born from Angular JS and its friends' long-time wars, like

WebPack - you have 20 little JS files for 20 components, each with html and css? let's make it all 1 thing.

Shadow DOM - take a piece of HTML, CSS and JS and hide it in a little box that is a new HTML Element, like input type "range".

Web Components - just like in server side programming where you have a DLL for each thing, or a library or user control or whatever, on the web when a certain part must do its own job independently it's technically called a component; it doesn't matter if it's the whole page or a little clock.

with all that, a web dev can now do everything he used to do with jQuery or basic angular or react with pure JS, CSS, and HTML. so no more need for those. so what now?

New Age, with NodeJS and CLI, Cloud and git

so now, with no more need for the solutions that were made to solve those problems, like jQuery and AngularJS, and even React, each "item" moved on to better implementing its own philosophy, with the help of NodeJS.

NodeJS is another thing that changed the web. not because it's a server platform built on JavaScript, but because it's light enough to be used as a helper tool, not noticed by the main framework. thus when developing with Angular 4 and others, you have a dev environment based on Node.

also, Windows adopted the CLI, meaning you can now do many things with the Command Line Interface, your CMD, that thing that looks like DOS. so for example, creating a new project in VS from the MVC5 template is just writing "create mvc5", and boom, no more screens with choices etc., and adding a folder with all that 3rd party nuget - again, just "npm bla bla". Linux people are happy, and it's still hard to install anything on linux, but for devs that opens a lot of possibilities, like creating an entire angular 4 project with everything needed in 1 line, or adding a component or anything in 1 line, what used to take a lot of coding.

the Cloud made a lot of space for us to deploy our things, via git, so that's another thing: you can now easily put your code in the cloud, build it and run it, .Net or JS.

so what do they want and how do they do it?

jQuery still wants to offer you an even simpler life.
TypeScript wants to change JavaScript again, and they are doing it well and being adopted.
React wants to help you do components more easily.
Angular wants to be your Steroids Framework.

so for just some quirks, go pure JS and CSS.
for extras and plugins, jQuery.
for anything that is NOT a big SPA, or if you don't need or want a Framework's limitations, use React.
if you're going SPA, big thing-big framework, or really Complex, here comes Angular 4

Angular 4

i must admit I didn't use Angular 2, yet I used Angular JS from 1.0.8. I got the point where it wants to be the overseer and I liked the way it did it. 2 was an experiment with the new ongoing stuff, so I just waited for 4.

so since angular wants to rule them all, they went this way: no more a simple JS file with the framework; instead you create a big project that lets you manage everything and then "compile" it to (5) js file(s).

also, the project is based on Node, so when you save you immediately see all the changes in the browser.

CLI, so you're not dependent on an IDE: commands to create items, build them, test, deploy etc.

WebPack, so you can still write a lot of small components, directives, css files, etc. etc., and it will render everything into 5 files, 1 for the framework dependencies and 1 for your code. the others are in the next post.

and they made it all prepared for git deployment :)

so how do you use and tame it? lets find out in our next post "Angular 4 - Understanding Angular 4 Basics and Files".

for the entire series
"Angular 4 with .NET (webForms/MVC) Tutorial - Table Of Content"

Tuesday, September 5, 2017

Setting up (Kali) Linux on Windows 10 Pro Hyper-v

well, since that forced me to travel along blogs and stuff... let's write everything down. this should be basically the same for any Debian, and I guess even Ubuntu.

if I write a detailed post it will take forever for something that someone who creates a VM should know, so let's write down just what is needed.

1. Enabling Hyper-V

from MS DOCS about enabling hyper-v open Windows Powershell with "Run as Administrator" and copy paste this command 

   Enable-WindowsOptionalFeature -Online -FeatureName:Microsoft-Hyper-V -All

2. Internet

you have 2 ways to go. External, or Internal/Private.

External means you create a virtual switch that just connects to your computer's internet and is open to the world. don't forget to get an Anti Virus.

Internal/Private means a virtual switch that is not connected to your computer's internet. if you do anything in prod, or in a company, or secure, choose this; for Internal, go to your default Ethernet connection and share it (Properties -> Sharing) with the Internal switch, while Private shares nothing.

for home use, just do an external one. and if it doesn't work, it means you just chose the wrong network.

you can check this from Control Panel -> Network and Sharing Center -> Change Adapter Settings; there you should see your vSwitch, and it should not have a red "X" on it. if it does, change the "Connection type" in the switch manager until one works.

3. creating Virtual Machine

go to MS DOCS about creating a new VM for a simple tutorial; for Kali Linux, you'll have to choose an .iso yourself, so just download it from the Kali Site. use the latest 64bit.

when you try to run the machine, IF you chose Generation 2, you'll get an error. fix it by Disabling Secure Boot in the VM security settings, as explained here.

4. Kali settings

you might not have a repository at all, meaning that all your "apt" commands just won't work.

on the Kali site you have the right repository, and your easiest way to open the right file is to write in the terminal

   leafpad /etc/apt/sources.list

5. RDP

so following the kali forum on how to install xrdp, I'll take the minimum steps needed.

we start with updates and upgrades

   sudo apt-get update -y && sudo apt-get upgrade -y
   sudo apt-get dist-upgrade -y

i got a window asking to enable a feature that is a security risk, mainly since you run things as root. since it's a home machine, I don't mind. for a server or prod machine, you should create a user, and generally never use root.   P.S. all that might take some time

and then the xrdp 

   sudo apt-get install xrdp lxde-core lxde tigervnc-standalone-server -y

now some config editing, start with the xrdp.ini

   sudo leafpad /etc/xrdp/xrdp.ini

and find autorun= and change it to autorun=Xvnc.

now another file

   sudo leafpad /etc/X11/Xwrapper.config

changing the allowed_users=console to allowed_users=anybody (enabling root privileges with the rdp). more info in systutorials.

starting the services to test the connection

   sudo service xrdp start
   sudo service xrdp-sesman start

try to connect, choosing Xvnc. write ifconfig in the terminal to get your machine's IP.

if it works, then make them start on startup

   sudo update-rc.d xrdp enable
   sudo systemctl enable xrdp-sesman.service

NOTICE that when you close and reopen your RDP you will NOT return to your session.
I couldn't find a solution yet, as all the port games didn't succeed. it also doesn't work if you have the Hyper-V session open.

6. RDP Advanced

rdp shortcut - create a new text file, rename it to <whatever>.rdp, right click -> "Edit", and set your details, i.e. your machine's IP.

i don't want to choose Xvnc every time again.
so in the xrdp.ini just move the [Xvnc] section to the top. restart and try.

skipping the username and password: in the [Xvnc] section change the username and password values from ask to root and your password. restart and try; you just need to press OK. NEVER do this UNLESS it's on YOUR OWN COMPUTER. I couldn't find a way to remove the login screen; I guess you need a linux client.

NOTICE that from RDP you can't get to "service" in the terminal unless you use sudo.

and make it a bit faster.

7. Google Chrome

ha, everything in linux is hard...

follow this link to install chrome; remember to SAVE the file, and from that directory right click to open a terminal before running the install command.

Hyper-V Kali Linux Generation 2

since the latest Kali Linux (2017 series) is based on debian 8 or 9, there is support for Generation 2 in Hyper-V, with UEFI boot; all you need to do is disable Secure Boot

1. open Hyper-V Manager
2. right click on the VM
3. Settings
4. Security
5. uncheck "Enable Secure Boot"
6. OK
7. run your VM.

it's all in MS DOCS; there is a PowerShell command there also.

Monday, September 4, 2017

Windows, Unix, Linux, and all between

Windows vs Linux? Windows vs Unix? Linux vs Unix? wait, isn't that all the same thing??

let's break it down, top to bottom.


briefing from this answer at stackexchange:

in the beginning, somewhere before the 50's, computers could do 1 thing, just like a car that can only drive. you have a wheel, but you can't tell it to pick something up. computers were designed to do 1 specific task and that's it.

then in the 50's OSes (Operating Systems) started to appear, but you needed to tell them exactly what to do every time you wanted to do something. take a look at the list with the timeline.

there were games already. yes, games, like chess and Space Travel. and some guys at AT&T Bell Labs decided to build a better, simpler OS to run them. and thus Unix was born.

some big boys bought some rights; it was mainly for heavy Enterprise, gov, academia.

at a point they made standards for something to be called UNIX, and that lives till today, with 3 variations, POSIX, Single UNIX Specification (SUS), and Open Group Base. example:

in the 70's-80's there were a few major corps selling unixes that were derived a bit from each other and, mainly, made A LOT of money. and all were closed source except 1 (free BSD). that's for all the open source free software people to remember for a second.

in the 80's, 2 things were born after IBM's PC (Personal Computer) and DOS (Disk Operating System): Microsoft and the GNU project.

Microsoft saw the opportunity in IBM's PC: since UNIX was designed for computer engineers to use, it was hard to use. so MS bought some basic DOS, applied it to IBM's PC, and an empire was born.

meanwhile the GNU project, which, like other open source projects, has a name that doesn't meet logical expectations but is just a fun one, it stands for "GNU's Not Unix", decided to build a free OS. but they meant freedom of use, not free of price; technically the contributors also made it free of charge, and later the FSF was born.

let's make a stop here

let's understand 2 very important things here that will affect our future. when UNIX was designed, they thought about servers, heavy duty, computers that were meant to do great things. so they designed it at an academic-industry level. and that's how it was and still is sold: very expensive, with its own dedicated hardware, itself dedicated, with code altered to the mission.

MS just gave something simple and low-cost for simple and low-cost hardware. the design was based on simplicity: single computer, single user, no internet, no nothing. a text editor and a game were to run on it.


the GNU project had a problem: they never made it to completing a kernel (today, technically, they finished it, but...).

and a bit before the 90's Andrew S. Tanenbaum designed and wrote a MicroKernel.

Kernel ? OS ?

ye, just a sec. what's an OS? and what's a Kernel?

an OS is this:
(taken from the wiki about OSes.) there is the Hard Disk, CPU, RAM, Graphics card, Keyboard, Mouse, all that? they can't talk to you, Mr. User. yes, you click the mouse, but how does the computer know the mouse is there and that a button was clicked?

applications try to do things according to what you click with the mouse. but just imagine that with every game you buy you'd need to pay for software that activates the screen, mouse, sound, etc. that's the job of the OS. an OS knows how to "wake" the computer to life and do all the "talking" to the hardware, and it also gives a platform to other apps: it renders images, compiles and runs code, renders text, etc., sometimes with the help of drivers.

the part that "talks" with the hardware is the Kernel (from wiki).
now that you understand better what an OS is, and why the Kernel is such a major part of it, you can understand why a single-user monolithic (non-modular) design will be totally different from a multi-user modular design, even if you see the same thing on the screen. just like riding a bike vs flying a jet.

soon the GUI (windows) will take a part too.

so until here we have UNIX and DOS. a shared feature between them is the terminal: with UNIX it's by design, with DOS it's by limit. but after some time MS finally made it with Windows, a DOS that has a GUI, and that's an important note.

while these 2 camps were developing, the DOS companies with MS at the head of the pyramid, and the UNIX corps, a guy named Linus Torvalds built a kernel that worked perfectly, gave it away free of charge, and called it Linux. that's why everybody refers to the GNU OS as Linux.

Linux's birth - professor Andrew Tanenbaum wrote a kernel designed differently, which he called a MicroKernel. Torvalds didn't like it, so he wrote his own and used GNU for the rest. but they all, these 2 and the GNU project, got their ideas from the UNIX design, so it's all called Unix-like.

since then, since it's an open project and free, people have made many many changes and developments, and there are so many distributions of Linux, from some high priced stable servers, to some high end tech, and lately many user friendly ones.

basically, if you want to start, use Ubuntu or Mint.

take a look at the history part ending here with this image


so its easier to talk about the differences between UNIX and Linux, since GNU\Linux is very influenced from the UNIX project and spec, and while never trying to match the spec to be called a "real" Unix (and maybe the can), the took the principles and make something new, hopefully better, and free, free of charge, and with the freedom of use. you control the OS, not the OS controlling you.

the UNIX design, as simply as i can put it after reading a great article in the Register, plus some extras, is a modular design with a matrix of who does it (multi-user) and where it's done (multi-task). add to that the fact that everything is done via a terminal, a SHELL, not a GUI.

since i found no image for that matrix, i made one, so sorry for the results.
don't get petty with me, this is not a low-level post. the point is that every app runs in its own zone, and every user has their own rights. the kernel is a user, and you are a user, so if you run 2 apps, app1 and app2 run with your privileges, so they can't hack or sabotage the kernel. and if one asks for image rendering and the other asks for text rendering, or even if both ask for the same thing, each is given a different context, so they can't touch or damage each other.

so you basically can't ruin anything.

add the fact that everything runs from a shell, like a command line. you can't ruin anything, and you almost can't get hacked.
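to make the "every user has its own rights" idea concrete, here is a minimal shell sketch (the file name `demo.txt` is just a made-up example): every file records an owner plus a user/group/other permission matrix, and the kernel enforces it against every process that touches the file.

```shell
# create a file that only its owner may read or write;
# the kernel enforces this against every other user's processes
touch demo.txt
chmod 600 demo.txt        # rw for the owner, nothing for group/others
ls -l demo.txt            # permission column shows: -rw-------
rm demo.txt               # clean up
```

this is the same mechanism, just at the file level, that keeps one user's apps from sabotaging another user's files or the system itself.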

add the fact that servers are made to be installed and administered from another station, with no GUI at all, just a connection via terminal. you can't ruin anything, and you almost can't get hacked.
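that GUI-less administration usually just means ssh from your own machine. a hedged sketch: the user and hostname here (`admin`, `server.example.com`) are placeholders, not a real server.

```shell
# administer a headless server over the network; no GUI involved,
# everything happens inside the remote shell session
ssh admin@server.example.com              # open an interactive remote shell
ssh admin@server.example.com 'uptime'     # or run one command and exit
```

the server itself needs nothing but an ssh daemon running; the screen, keyboard, and mouse all stay on your station.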

so: very stable, very secure, very strong, handles multitasking easily.
and very NOT user-friendly... and that's a big thing, whether you like it or not.

now UNIX vs the GNU/Linux OS... well, since Unix is trademarked and everything, they couldn't use the same code, so they wrote everything again, just the same. there are some orientation differences, like Unix is more for servers with less filesystem support etc., while Linux is more for the PC, etc. so design-wise, at least for the early years, there are no major differences.

if you REALLY want the details, read the article at IBM.


so to wrap up everything so far: Unix, as you got it, is a commercial thing. the most up-to-date versions are closed source, cost a lot of money, and are developed by corps. some open-source versions exist, mostly for devs, and lately some with a GUI.

it's a high-end OS for servers and heavy duty, and with the new Linux servers today its numbers are falling...

you can get a picture of the Unix distributions from this image from wiki:

but it's still the "father" of Linux and DOS.

BTW, Mac OS is originally a Unix; today some still call it a Unix, some call it Unix-like.


after stating that, design-wise, Linux is a Unix remade for the private user, for the PC, i must cover Windows before wrapping things up with Linux, since things have changed.

MS-DOS, and every kind of DOS there was in those days, tried to be Unix-like without the power of Unix: an OS, a terminal, that can do stuff, but single-user and single-task.

you can read on Quora about the technical differences between DOS and Unix, but to sum it up, the singularity makes it weak and non-secure, and if something goes wrong, everything goes wrong.

but it's simple. simple enough that they could ship it with EVERY IBM PC and... create a monopoly.

Windows up to 98/ME was actually DOS-based. then they changed it to NT, and Windows 10 is supposed to be something completely new, trying to catch up.

DOS was a great success, even though it was so poor, because it was so SIMPLE. and low cost. anyone could use it. and those were days without internet and without servers in every house and company, so also without all the common threats. no one needed more than that.

with Windows 3.11, the GUI, things became better, and only lately worse. the better: since every kid and old grandma can use Windows, they took the market. everybody knows they took the GUI idea from Apple, with their MacPaint.

but that's all over now.

but the problems arrived. you want it simple? you can't do complicated things with it. i'm talking now about all versions of Windows, since these things came with time as Windows became more and more the main OS in the world, server and PC.

1. singularity: single-user, single-task, and using the same parts for everything, starting with using the kernel for single tasks just because it's one piece with the OS, and up to having Internet Explorer serve Outlook, Help, and browsing. everything is used everywhere, like a web.

so whenever something breaks, it breaks everything, and you don't know what it is.

2. the single-user, single-task design also means, in the time of the internet, that taking control of 1 part means you have complete control. easy to hack. it also makes it almost impossible to create a server.

3. a GUI, not a terminal, makes it easier for the hacker, who just needs to tell the computer to do ENTER instead of typing letters. that is why one of the greatest Windows security features is "Run As Administrator": you just need more clicks, and faking a right click at the right spot with the mouse is hard. no more ENTER horror.

but all that makes the "just do next, next, next" available, and that is where Windows wins. big time.

4. the RPC model. as explained in detail in the Register, it means that every program in Windows is always listening for incoming commands. again, big doors for hackers.

5. no shell. you can do things only the Windows way. the OS comes as-is, no flexibility. all that is generally just annoying most of the time.

Microsoft will be forever attacked for Windows's singularity and its weaknesses, and envied for its success. a simple example of the "other side", the Windows supporters, is this makeuseof post.

it's the "worst" OS, although not bad at all: exactly what a simple user needs, and it's evolving. it has support for games, programs, dev, servers, they do everything; even the new PowerShell gives you all the abilities of .NET. it has the most support, software-wise and hardware-wise. truly, a simple user, i.e. not a dev or IT person, can't afford to use anything else. not even Linux.


so, as we said, as an OS Linux is a Unix, or to be exact a Unix-like. so it has all the advantages of Unix: security, stability.

but it also has its downsides: complexity, and therefore it can't be used by the simple user. so there are much fewer games, programs, etc., and even fewer servers, since people got used to Windows, so that's the GUI they know and want to use. for years Linux was just a free Unix, for academia, and for devs and IT people who were more dedicated.

having a GUI in Linux didn't help much until lately, since the GUI was only for exploring what you have; real things like adding users or security stuff need the terminal. although that's changing too. still, for real security, do create a GUI-less server and connect via terminal from another station.

but Linux also has the middle part that Unix and Windows don't: the community. and with the rise of the web, where everybody can take part (thanks to Windows, which made it possible for everyone to own a computer they could operate as a kid/student), the community helped build things up. so as time passes we see more drivers, software, hardware compatibility, games, etc., and a better and better GUI.

while years ago Linux was a thing for devs, IT, hackers, etc., with new simpler GUIs around, more user-friendly distributions, and a focus on enabling it for the simple user, it's getting a very good swing, especially in the server zone, where it's just much cheaper.

it still doesn't come near Windows: still a bit complex, too many distros. and while it wins big time on stability, security, and price, Windows still wins big time on support and ease of use, which are the main criteria for the simple user.


with the drive of open source and community, even Microsoft is making more and more things open source, and free.

Linux is coming more and more toward the understanding that if you want a real piece of the market, you can't stay with just stability and security; you must help the simple user, even in the server part.

if you know your way around, and want to save the $100, use Linux. I chose Windows 10 Pro, to learn it with Hyper-V.

Linux is coming, faster and faster, and it's great. let's learn it.