coalesce {SparkR}	R Documentation

Coalesce
Description:

Applied to a SparkDataFrame, returns a new SparkDataFrame that has exactly
numPartitions partitions. This operation results in a narrow dependency,
e.g. if you go from 1000 partitions to 100 partitions, there will not be a
shuffle; instead, each of the 100 new partitions will claim 10 of the
current partitions. If a larger number of partitions is requested, the
SparkDataFrame will stay at the current number of partitions.

Applied to Columns, returns the first column that is not NA, or NA if all
inputs are.
Usage:

## S4 method for signature 'SparkDataFrame'
coalesce(x, numPartitions)

## S4 method for signature 'Column'
coalesce(x, ...)

coalesce(x, ...)
Arguments:

x		a Column or a SparkDataFrame.

numPartitions	the number of partitions to use.

...		additional argument(s). If x is a Column, additional
		Columns can be optionally provided.
Details:

However, if you're doing a drastic coalesce on a SparkDataFrame, e.g. to
numPartitions = 1, this may result in your computation taking place on
fewer nodes than you like (e.g. one node in the case of numPartitions = 1).
To avoid this, call repartition instead. This will add a shuffle step, but
it means the current upstream partitions will be executed in parallel (per
whatever the current partitioning is).
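The trade-off above can be sketched as follows. This is a minimal sketch, not runnable as-is: it assumes an active sparkR.session() and a hypothetical SparkDataFrame `df` with many partitions.

```r
## Not run:
# Narrow coalesce: no shuffle, but the whole computation may
# end up on very few nodes (one node when numPartitions = 1).
one_part <- coalesce(df, 1L)
getNumPartitions(one_part)   # 1

# repartition adds a shuffle step, so the current upstream
# partitions are still computed in parallel before the data
# is redistributed across the requested number of partitions.
balanced <- repartition(df, 10L)
getNumPartitions(balanced)   # 10
## End(Not run)
```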
Note:

coalesce(SparkDataFrame) since 2.1.1

coalesce(Column) since 2.1.1
See Also:

Other SparkDataFrame functions: SparkDataFrame-class, $, $<-, [, [[, [[<-,
agg, arrange, as.data.frame, attach, cache, checkpoint, collect, colnames,
colnames<-, coltypes, coltypes<-, columns, count, createOrReplaceTempView,
crossJoin, dapply, dapplyCollect, describe, dim, distinct, drop,
dropDuplicates, dropna, dtypes, except, explain, fillna, filter, first,
gapply, gapplyCollect, getNumPartitions, groupBy, group_by, head, hint,
histogram, insertInto, intersect, isLocal, isStreaming, join, limit, merge,
mutate, na.omit, names, names<-, ncol, nrow, orderBy, persist, printSchema,
randomSplit, rbind, registerTempTable, rename, repartition, sample,
sample_frac, saveAsParquetFile, saveAsTable, saveDF, schema, select,
selectExpr, show, showDF, storageLevel, str, subset, summarize, summary,
take, toJSON, transform, union, unionAll, unique, unpersist, where, with,
withColumn, withColumnRenamed, write.df, write.jdbc, write.json, write.orc,
write.parquet, write.stream, write.text
Other normal_funcs: Column-class, abs, bitwiseNOT, column, expr, from_json,
greatest, ifelse, is.nan, isnan, least, lit, nanvl, negate, rand, randn,
struct, to_json, when
Examples:

## Not run:
sparkR.session()
path <- "path/to/file.json"
df <- read.json(path)
newDF <- coalesce(df, 1L)
## End(Not run)

## Not run: coalesce(df$c, df$d, df$e)
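The Column form picks, row by row, the first non-NA value among its inputs. A hedged sketch of its use (the SparkDataFrame `df` and its columns c, d, e are hypothetical, and an active sparkR.session() is assumed):

```r
## Not run:
# For each row, take the first non-NA value among c, d and e;
# the result is NA only when all three inputs are NA.
df2 <- withColumn(df, "first_non_na", coalesce(df$c, df$d, df$e))
head(select(df2, "c", "d", "e", "first_non_na"))
## End(Not run)
```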