Answer:

Step-by-step explanation:
The slope-intercept form of a line is y = mx + b, where m is the slope.
For the given line y = -3x + 78, the slope is m₁ = -3.
<em><u>To find: the slope of the line perpendicular to the given line.</u></em>
Two lines are perpendicular to each other if the product of their slopes is -1.
That is, m₁ × m₂ = -1
So the slope of the new line is
m₂ = -1/m₁ = -1/(-3) = 1/3
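As a quick check, the perpendicular-slope rule can be sketched in Python (the function name here is just for illustration):

```python
# Minimal sketch: perpendicular slopes multiply to -1, so m2 = -1/m1.
def perpendicular_slope(m):
    # Undefined for a horizontal line (m = 0), whose perpendicular is vertical.
    if m == 0:
        raise ValueError("perpendicular to a horizontal line is vertical")
    return -1.0 / m

m1 = -3                        # slope of y = -3x + 78
m2 = perpendicular_slope(m1)   # 1/3
print(m1 * m2)                 # product of perpendicular slopes is -1
```

Multiplying the two slopes back together confirms the result: (-3) × (1/3) = -1.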
Answer: 67.725 ft²
Step-by-step explanation:
A regular heptagon has 7 sides, and its area is calculated using the formula
A = 1/2 × n × s × r, where
n = number of sides = 7
s = side length = 4.3 ft
r = apothem = 4.5 ft
Area = 1/2 × n × s × r
= 1/2 × 7 × 4.3 × 4.5
= 0.5 × 7 × 4.3 × 4.5
= 67.725 ft²
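The same computation works for any regular polygon, so it can be wrapped in a small Python helper (the function name is illustrative):

```python
# Area of a regular polygon: A = 1/2 * n * s * r
# n = number of sides, s = side length, r = apothem
def regular_polygon_area(n, s, r):
    return 0.5 * n * s * r

# Heptagon from the problem: 7 sides, side 4.3 ft, apothem 4.5 ft
area = regular_polygon_area(n=7, s=4.3, r=4.5)
print(f"{area:.3f} ft^2")   # 67.725 ft^2
```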
Answer:

Step-by-step explanation:
Given : 
Solution:



Thus the solution equals 6 + 2√14, which is irrational: 14 is not a perfect square, so √14 is irrational, and adding the rational number 6 keeps the result irrational.
Answer: The average length of time that the 25 customers waited before leaving the bank. <u> e. Statistic</u>
The list of times for the 25 customers who left the bank. <u> f. Data</u>
All of the bank's customers. <u> d. Population</u>
The 25 customers that the manager observed leave. <u> c. Sample</u>
The length of time a customer waits before leaving the bank. <u> a. Variable</u>
The average length of time that all customers will wait before leaving the bank. <u> b. Parameter</u>
Step-by-step explanation:
Data is a list of observations.
In statistics, a variable is an attribute that describes a person, place, thing, or idea.
A large group of similar individuals, as defined by the researcher, is known as a population; a subset of it is known as a sample.
A measure of some characteristic of a population is known as a parameter; the corresponding measure computed from a sample is known as a statistic.
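To make the statistic/parameter distinction concrete, here is a tiny Python sketch with made-up wait times (the numbers are assumptions, not from the problem):

```python
import statistics

# Data: the list of observed wait times (minutes) for a sample of customers.
sample_waits = [4.2, 7.5, 3.1, 9.0, 5.6]

# Statistic: a number computed from the sample, e.g. the sample mean.
sample_mean = statistics.mean(sample_waits)
print(sample_mean)

# The corresponding parameter would be the mean wait over ALL customers
# (the population), which is typically unknown and estimated by the statistic.
```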
Tensor-on-tensor regression relates tensor responses to tensor covariates through a parameter tensor/matrix of low Tucker rank, without prior knowledge of its intrinsic rank.
To handle the unknown rank, we study the effect of rank over-parameterization and propose the Riemannian Gradient Descent (RGD) and Riemannian Gauss-Newton (RGN) methods. We provide the first convergence guarantee for generic tensor-on-tensor regression by showing that RGD and RGN converge linearly and quadratically, respectively, to a statistically optimal estimate in both the correctly-parameterized and over-parameterized settings. Our theory shows that Riemannian optimization methods adapt to over-parameterization automatically, without any implementation changes.
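The actual RGD/RGN methods operate on a Tucker-rank manifold; as a loose, simplified illustration only (not the paper's algorithm), the sketch below runs plain gradient descent on a factored low-rank parameter B = UVᵀ for a scalar-on-matrix model y = ⟨X, B⟩, with the fitted rank deliberately over-parameterized. All dimensions and names are made up for illustration:

```python
# Loose illustration only (NOT the paper's RGD/RGN): plain gradient descent
# on a factored low-rank parameter B = U @ V.T for the noiseless model
# y_i = <X_i, B_true>, with an over-parameterized fitted rank.
import numpy as np

rng = np.random.default_rng(0)
p, q, n = 6, 5, 300

# Rank-1 ground-truth parameter matrix.
B_true = np.outer(rng.standard_normal(p), rng.standard_normal(q))

X = rng.standard_normal((n, p, q))          # matrix covariates
y = np.einsum('ipq,pq->i', X, B_true)       # noiseless responses

r = 2                                       # over-parameterized rank (> true rank 1)
U = 0.1 * rng.standard_normal((p, r))
V = 0.1 * rng.standard_normal((q, r))

lr = 0.05 / n
for _ in range(1500):
    resid = np.einsum('ipq,pq->i', X, U @ V.T) - y
    G = np.einsum('i,ipq->pq', resid, X)    # gradient of 0.5*||resid||^2 w.r.t. B
    U, V = U - lr * G @ V, V - lr * G.T @ U # factored gradient step

rel_err = np.linalg.norm(U @ V.T - B_true) / np.linalg.norm(B_true)
print(f"relative recovery error: {rel_err:.2e}")
```

Even with the rank over-specified, the factored iterates recover B_true to small relative error, which loosely mirrors the paper's message that over-parameterization need not be known or handled specially.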
Learn more about tensor-on-tensor regression here:
brainly.com/question/16382372