What problem do node selectors solve in Kubernetes?
Node selectors let you constrain a pod so that it runs only on particular nodes, for example high-resource nodes, instead of being scheduled on any available node by default.
In the lecture example what was the cluster layout in terms of node sizes?
The example used a three-node cluster with two smaller, low-resource nodes and one larger, high-resource node.
Why did the data processing workload need to run on the larger node in the example?
The data processing workload could require extra CPU and memory during spikes, and only the larger node had enough resources to handle those spikes safely without being exhausted.
In the example with one large node and two small nodes what is the risk if you do not use node selectors for the heavy data processing pod?
The heavy data processing pod could be scheduled onto one of the smaller nodes, which might not have enough resources, leading to performance issues or resource exhaustion.
What is the default pod scheduling behavior in Kubernetes when no node selectors or affinity rules are used?
By default, Kubernetes can schedule a pod on any node that has sufficient free resources, without considering node size or role.
What is the basic idea of a node selector in Kubernetes?
A node selector is a simple constraint that tells the scheduler to place a pod only on nodes whose labels match a specified set of key=value pairs.
Where in a pod manifest do you configure a node selector?
You configure a node selector under the pod's spec section, using the field named nodeSelector.
In the lecture example what nodeSelector key and value were used to target the high resource node?
The nodeSelector used the key size with the value large, written in YAML as size: large.
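As a minimal sketch of where this goes in a manifest (the pod name and container image are illustrative, not from the lecture):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: data-processor          # illustrative name
spec:
  containers:
  - name: data-processor
    image: data-processor:1.0   # placeholder image
  nodeSelector:
    size: large                 # must match a label on the target node
```

Note that nodeSelector sits directly under spec, at the same level as containers.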
What do nodeSelector key and value pairs have to match against in the cluster?
They must match labels that are applied to the nodes, such as a node labeled size=large.
How does the scheduler use node labels when a nodeSelector is specified on a pod?
The scheduler looks for nodes whose labels match all the key=value pairs in the pod's nodeSelector and schedules the pod only on those nodes.
How do you label a node in Kubernetes with kubectl?
Use the command kubectl label nodes <node-name> <key>=<value>.
In the example how would you label a node called node1 as a large node using kubectl?
You would run kubectl label nodes node1 size=large.
After labeling node1 with size=large and creating a pod with nodeSelector size: large, where will Kubernetes try to schedule that pod?
Kubernetes will schedule the pod on node1, because node1 is the node whose labels match size=large.
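A sketch of the full flow with kubectl, assuming the node name node1 from the example and an illustrative manifest file name:

```shell
# Label the large node (node1 in the example)
kubectl label nodes node1 size=large

# Verify the label was applied
kubectl get nodes --show-labels

# Create the pod whose manifest contains the nodeSelector size: large
# (data-processor-pod.yaml is a placeholder file name)
kubectl apply -f data-processor-pod.yaml

# Confirm which node the pod landed on
kubectl get pods -o wide
```

The -o wide output includes a NODE column, which should show node1 once the pod is scheduled.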
What is a simple one sentence definition of a node selector?
A node selector is a set of exact label conditions that a node must satisfy for a pod to be scheduled there.
Why are node selectors considered a simple scheduling constraint mechanism?
They are considered simple because they only support exact key=value matches and cannot express more complex logical conditions such as OR or NOT.
Give an example of a scheduling rule that cannot be expressed using only node selectors.
You cannot express rules such as "run on nodes labeled size=large or size=medium" or "run on any node that is not labeled size=small" with basic node selectors.
Which Kubernetes features were introduced to handle more complex node placement rules than node selectors can express?
Node affinity and node anti-affinity were introduced to support richer, more flexible scheduling rules.
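As a sketch, the "large or medium" rule above can be expressed with node affinity's In operator, which plain node selectors cannot do:

```yaml
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: size
            operator: In        # other operators include NotIn and Exists
            values:
            - large
            - medium
```

The NotIn operator covers the "not size=small" case in the same way.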
When is it appropriate to use node selectors instead of node affinity?
Use node selectors when your requirements are simple and can be expressed as straightforward exact label matches, such as "only run on nodes labeled size=large".
What must you do to your nodes before a pod's nodeSelector can target them successfully?
You must first label the nodes with the key=value pairs that the pod's nodeSelector will reference.
How do labels and node selectors work together to control pod placement?
Nodes are given labels written as key=value pairs, and pods specify nodeSelector constraints that require those labels; the scheduler then places pods only on nodes whose labels satisfy the nodeSelector.