Answers for "find duplicate rows in pandas and drop that row based on some condition from another column"


Return a new DataFrame with duplicate rows removed. Note that this answer uses PySpark's DataFrame.dropDuplicates(), not pandas; a pandas sketch for the question in the title follows below.

# Return a new DataFrame with duplicate rows removed

from pyspark.sql import Row, SparkSession

# Create a SparkContext via a SparkSession (needed for sc.parallelize)
spark = SparkSession.builder.getOrCreate()
sc = spark.sparkContext

df = sc.parallelize([
  Row(name='Alice', age=5, height=80),
  Row(name='Alice', age=5, height=80),
  Row(name='Alice', age=10, height=80)]).toDF()

# Drop rows that are identical across all columns
df.dropDuplicates().show()
# +---+------+-----+
# |age|height| name|
# +---+------+-----+
# |  5|    80|Alice|
# | 10|    80|Alice|
# +---+------+-----+

# Drop duplicates considering only the 'name' and 'height' columns
df.dropDuplicates(['name', 'height']).show()
# +---+------+-----+
# |age|height| name|
# +---+------+-----+
# |  5|    80|Alice|
# +---+------+-----+
Posted by: Guest on April-08-2020
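
The answer above is PySpark; for the pandas task in the title, a minimal sketch of the same idea is shown below. The column names 'name' and 'score' are assumptions for illustration (they do not come from the original answer): sort on the condition column, then drop duplicates so only the preferred row per key survives.

# Minimal pandas sketch; 'name' and 'score' are hypothetical column names
import pandas as pd

df = pd.DataFrame({
    'name':  ['Alice', 'Alice', 'Bob'],
    'score': [5, 10, 7],
})

# Keep, for each duplicated 'name', only the row with the highest 'score'
deduped = (df.sort_values('score', ascending=False)
             .drop_duplicates(subset='name', keep='first'))
print(deduped)
#     name  score
# 1  Alice     10
# 2    Bob      7

In PySpark, choosing which duplicate survives is usually done with a Window plus row_number() rather than dropDuplicates, since dropDuplicates alone does not let you pick the row to keep.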
